Bio-mimetic high-speed target localization with fused frame and event vision for edge application
Evolution has honed predatory skills in the natural world where localizing and intercepting fast-moving prey is required. The current generation of robotic systems mimics these biological systems using deep learning. High-speed processing of camera frames using convolutional neural networks (CNN) (frame pipeline) on such constrained aerial edge-robots is resource-limited. Adding more compute resources also eventually limits the throughput to the frame rate of the camera, as frame-only traditional systems fail to capture the detailed temporal dynamics of the environment. Bio-inspired event cameras and spiking neural networks (SNN) provide an asynchronous sensor-processor pair (event pipeline) that captures the continuous temporal details of the scene for high speed but lags in terms of accuracy. In this work, we propose a target localization system combining event-camera and SNN-based high-speed target estimation with frame-based camera and CNN-driven reliable object detection by fusing the complementary spatio-temporal prowess of the event and frame pipelines. One of our main contributions involves the design of an SNN filter that borrows from the neural mechanism for ego-motion cancelation in houseflies. It fuses the vestibular sensors with the vision to cancel the activity corresponding to the predator's self-motion. We also integrate the neuro-inspired multi-pipeline processing with the task-optimized multi-neuronal pathway structure found in primates and insects. The system is validated to outperform CNN-only processing using prey-predator drone simulations in realistic 3D virtual environments. The system is then demonstrated in a real-world multi-drone set-up with emulated event data. Subsequently, we use recorded actual sensory data from a multi-camera and inertial measurement unit (IMU) assembly to show the desired working while tolerating the realistic noise in the vision and IMU sensors. We analyze the design space to identify optimal parameters for the spiking neurons and CNN models and to check their effect on the performance metrics of the fused system. Finally, we map the throughput-controlling SNN and fusion network onto an edge-compatible Zynq-7000 FPGA to show a potential 264 outputs per second even at constrained resource availability. This work may open new research directions by coupling multiple sensing and processing modalities inspired by discoveries in neuroscience to break fundamental trade-offs in frame-based computer vision.
1. Introduction
Predatory animals can quickly detect and chase their prey by triggering locomotion to intercept it. Such behavior involves visual input for the identification of the prey as well as distinguishing the predator's self-motion (ego-motion) from the relative motion of the steady surroundings (Figure 1A). Cheetahs have been recorded running at 25 m s−1 (Wilson et al., 2013) and their prey moves at comparable speeds within the Field of View (FoV). Successful hunting relies on advanced neural circuits that accept the incoming data from the visual and inertial sensory organs and process it to enable real-time locomotion actuation (Figure 1A). This closed-loop control system across different cortices is capable of highly parallel processing and achieves high power efficiency, speed, and accuracy simultaneously (Sengupta and Stemmler, 2014). Such a biological neural system is optimized over generations through evolution and can be an inspiration to address engineering applications, for instance, high-speed target localization for autonomous drones under constrained computing resources.
The state-of-the-art methods for object detection use convolutional neural networks (CNN) due to their high accuracy (Zhao et al., 2019; Jiao et al., 2019). Although CNNs are also bio-inspired and have emerged from the layered connectivity observed in the primate brain (Cadieu et al., 2014; Khaligh-Razavi and Kriegeskorte, 2014; Güçlü and van Gerven, 2015), the computation gets increasingly intense with larger networks. The models are typically large (Bianco et al., 2018) with considerable processing latency that limits the throughput (outputs per second or frames per second-FPS) of the computation. Light-weight models trade off accuracy for latency (Howard et al., 2017). The latency can be reduced while preserving accuracy by equipping more powerful computing hardware on the drones (Duisterhof et al., 2019; Wyder et al., 2019; Falanga et al., 2020). But the edge computing platforms on a drone usually come with limited accessible power due to the energy density of batteries, which eventually limits the speed and throughput of the computation. Therefore, the traditional frame-based pipeline with a frame-based camera (optical camera) and CNNs suffers from the trade-off between computational latency and accuracy for multiple real-time visual tasks including segmentation, object detection (Huang et al., 2017), and gender detection (Greco et al., 2020).
On the other hand, spiking neural networks (SNNs), which represent a new paradigm of artificial neural networks, attempt to computationally model biological neural systems. Spiking neural networks exhibit low power consumption in customized hardware platforms (Akopyan et al., 2015; Davies et al., 2018) by exploiting asynchronous decentralized tile-based designs. Spiking neural networks have been demonstrated to work for object detection of simple shapes (Cannici et al., 2019) using training methods like approximate backpropagation (Lee et al., 2019; Zhang and Li, 2019) and spike-time-dependent plasticity (STDP) based training (Diehl and Cook, 2015). Recently proposed bio-mimetic event-based vision cameras called dynamic vision sensors (DVS) boost the potential of SNN-based visual processing even further by matching it with a sensor of a similar modality. The regular optical camera cannot take full advantage of SNNs because of its discrete frame generation structure, where the time-based computation of spikes cannot be fully exploited. The DVS overcomes this by allowing continuous-time input generation in the form of events. An event is generated when the intensity of a pixel in the FoV of the camera changes. Event generation for all pixels takes place in parallel and asynchronously, thus sensing only the motion of objects in the FoV, saving circuit resources and improving bandwidth. This event-based data flow can be processed by SNNs with a matching data modality. The dynamic vision sensor offers low power consumption suited for edge applications, which, coupled with high speed, has been applied in tasks like a robotic goalie (Delbruck and Lang, 2013) and looming object avoidance (Salt et al., 2017). This makes DVS and SNN-based processing (event pipeline) perfectly suited for a task like predation where low-power and high-speed requirements are presented simultaneously.
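To make the sensing modality concrete, the sketch below illustrates the per-pixel contrast-threshold principle behind DVS event generation; the threshold value and the frame-pair sampling are illustrative assumptions, as a real DVS performs this comparison asynchronously in per-pixel analog circuitry rather than on frames.

```python
import numpy as np

def emit_events(prev_log_intensity, curr_log_intensity, t, contrast_threshold=0.2):
    """Emit DVS-style events: a pixel produces an event when its
    log-intensity has changed by more than the contrast threshold.
    Returns a list of (x, y, t, polarity) tuples."""
    diff = curr_log_intensity - prev_log_intensity
    ys, xs = np.nonzero(np.abs(diff) > contrast_threshold)
    polarities = np.sign(diff[ys, xs]).astype(int)   # +1 brighter, -1 darker
    return [(int(x), int(y), t, int(p)) for x, y, p in zip(xs, ys, polarities)]
```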
Spiking neural network frameworks, however, can hardly achieve the same level of detection accuracy as their CNN counterparts because of the lack of reliable training methods. Very deep networks cannot be trained easily and reliably because of the non-differentiability of spikes (Lee et al., 2020). Although some attempts using conversion of trained
ANN-to-SNN (Kim et al., 2020) provide decent accuracy, the complexity of the network negates its speed advantage. Newer methods with objective functions involving smoothened spikes (Lee et al., 2020) and target spike trains (Shrestha and Orchard, 2018) have been proposed but are typically applied to simpler problems (Yin et al., 2021). Spiking neural networks, therefore, lie in the region of low accuracy and low latency (Kim et al., 2020; Cannici et al., 2019). The previous literature shows this clearly, as illustrated in Figure 1B for different SNNs (Chowdhury et al., 2021; Rathi and Roy, 2021; Wu et al., 2021; Zheng et al., 2021; Meng et al., 2022). Simultaneously, CNN configurations (Bianco et al., 2018) achieve higher accuracy levels at the cost of slower processing on an NVIDIA Jetson TX1. Figure 1B shows results corresponding to the ImageNet classification dataset. ImageNet is chosen as it is reasonably complex and is used frequently to benchmark SNN performance. The SNN latency is calculated using the number of time steps for inference, with each time step consisting of 1 ms of synchronization (Merolla et al., 2014). Figure 1B makes it evident that CNNs have the potential to deliver higher accuracy at a lower speed whereas SNNs are capable of providing high speed if the accuracy can be traded off. Thus, we identify two couples of sensor and processing networks with complementary prowess. Convolutional neural networks and optical cameras show high accuracy by capturing detailed spatial resolution whereas SNNs and event cameras show high-speed processing by capturing the temporal dynamics of the scene. Thus, the processing alternatives present an interesting trade-off between event and frame pipelines (Figure 1B). Our work proposes to overcome this trade-off by using a high-speed ego-motion filter in the event pipeline for fast target estimation, assisted by optical camera and CNN-based reliable object detection for corroborating the identified position (Figure 1C). The continuously operating SNN filter checks for a fast-moving target (prey drone in this case) entering the FoV at all times while the CNN gets activated at a lower frequency, confirming or refuting the presence of the identified prey. The two systems operate in parallel, allowing the predator drone to exploit the latency and accuracy advantages concurrently. Our approach may find some similarity with combined event and frame sensing for object detection where the event-stream determines the area of interest for the CNN. Similarly, another fused approach for optical flow (Lee et al., 2021) combined the sensor outputs in a single CNN-like pipeline. However, both these approaches use a CNN backbone. Therefore, the throughput limitation imposed by the CNN remains and the advantage of the event camera is not fully utilized. Using our multi-pathway approach, we propose
to cover a high-resolution spatial domain for prey detection while quickly transferring to a continuous-time domain for high-speed target localization using the insights from the visual systems of predatory animals. The ego-motion of the moving predator induces DVS events for stationary objects. Most animals are known to filter out activity caused by ego-motion using different kinds of sensory feedback systems (Kim et al., 2015). It was proposed that vestibular (inertial) feedback signals through inhibitory connections compensate for the ego-motion in insects and primates (Zhang and Bodznick, 2008; Benazet et al., 2016). This was experimentally demonstrated recently, in Kim et al. (2015), where the vestibular-sensor-induced self-motion cancelation was observed by probing the neurons in houseflies. (1) Our first contribution lies in the design and implementation of a bio-inspired SNN-based ego-motion cancelation filter fusing event-based vision with vestibular and depth information. Our SNN filter removes the activity generated by the ego-motion, leaving only the events corresponding to the moving prey, by mimicking the neuro-biological counterparts. The loss of accuracy in the noisy SNN filter is compensated by a highly accurate CNN-driven object detector which captures and processes the RGB image periodically to validate the SNN estimate. Therein lies the second key contribution of this article. (2) We propose a close interplay between CNNs and SNNs by coupling a spatio-temporal consistency criterion with a neuro-inspired model. This coordination between multiple pipelines in different phases of chasing is inspired by the use of specialized neuronal clusters in different phases of hunting in larval zebrafish (Förster et al., 2020). The separation between a locally fast (event pipeline) and globally slow signal (frame pipeline) is similar to primate vision (Mazade et al., 2019). Our algorithm relies on a CNN to detect and identify the prey when it is far and a longer detection latency is acceptable, and gradually hands over the task to the SNN as the predator starts to approach the prey and a shorter latency for fast-tracking is of the essence. Our multi-pipeline processing with color information (frame pipeline) for accuracy and motion information (event pipeline) for speed emerges from the similar color and motion separation in the visual processing of primate and insect vision (Gegenfurtner and Hawken, 1996; Yamaguchi et al., 2008).
The algorithm is verified in a three-step process. In the first step, we implement it on a programmable drone environment-Programmable Engine for Drone Reinforcement Learning Applications (PEDRA) (Anwar and Raychowdhury, 2020) with different environments of varying levels of obfuscation and multiple evasive trajectories for the prey. In the second stage, the algorithm is implemented on a real drone in both indoor and outdoor environments. The prey drone is manually flown in front of the closed-loop autonomous predator drone while the DVS data is emulated from the frame-based images captured by the onboard camera of the predator. Finally, we record a prey flight using a hybrid camera assembly with an on-board inertial measurement unit (IMU) and process it to show accuracy-preserving high-speed computation tolerating real-world sensor noise. The design space is explored to tune the optimal parameters for the SNN and the appropriate model for the CNN, and to study their impact on the interplay within the fused CNN+SNN system. Finally, we estimate the circuit-level cost of implementing such a system on an edge-compatible FPGA to show a potential throughput of >264 outputs per second. This work shows the conjunction of SNNs with more established CNNs for specialized high-speed processing. This work may open a new research direction by coupling parallel sensing and processing modalities to break fundamental trade-offs in frame-based computer vision.
2. Methodology

2.1. Target estimation - ego-motion cancelation using SNN

Identification of the prey from the cluttered event stream requires separation of the events corresponding to ego-motion and their efficient cancelation. Model-based optimization methods like contrast maximization (Gallego and Scaramuzza, 2017; Rebecq et al., 2017), feature tracking (Kueng et al., 2016; Zihao Zhu et al., 2017), or deep learning techniques (Alonso and Murillo, 2019; Mitrokhin et al., 2019) have been used for ego-motion cancelation and moving object detection in event cameras. However, these methods require iterative optimizations and multiple memory accesses, lowering the speed of computation. Secondly, our method uses the CNN for accuracy compensation. Therefore, the high-speed requirement takes precedence over accuracy for the event pipeline and we rely on bio-inspired faster alternatives while allowing a compromise in accuracy. The performance of an object detector is typically measured using the overlap between the ground truth and predicted bounding boxes. The target localization task at hand requires actuating the predator with appropriate velocity and rotation depending upon the region in which the target is present. Therefore, an accurate detection is one where the output of the SNN or CNN lies within a threshold pixel distance from the actual position of the prey drone. This relaxed definition of accuracy allows measurement in terms of the percentage of correct localizations, as used in the rest of this article.
The event-accumulated frame generated by the event stream from the event camera in a time window is shown in Figure 2A. The independent rapid motion of the prey creates a denser cluster of events around it, as seen in the image. Other events are generated by the stationary objects within the scene and should be canceled. A higher self-velocity of the predator generates more events corresponding to stationary objects. Therefore, activity cancelation needs to be proportional to the predator's self-velocity. Secondly, the reliance on the event pipeline is higher when the prey is close to the predator, where
it can quickly evade and escape the FoV. This is because the time to escape the FoV is long when the prey is at a longer distance, and the slower detections from the frame pipeline are more reliable. Therefore, the SNN filter needs higher accuracy when the distance between the prey and predator is small. Therefore, the events at a higher depth from the predator are canceled out to boost activity in the close vicinity. This cancelation strategy is illustrated in Algorithm 1. Every continuous patch of active pixels requires a fixed number of events to be canceled from it. This cancel mask is denoted by "cancel." The pixel array is denoted by "p", where pixel values are either 0 or 1. The number of canceled events is proportional to the self-velocity of the predator and the depth of the pixel undergoing the cancelation operation (Figure 2B). v_H and v_V denote the scalar horizontal and vertical components of the predator motion including velocity and rotation, which is called self-velocity in this article. This is acquired through the onboard IMU of the event camera. The depth is acquired from a stereo camera which provides depth for every pixel in meters. The velocity and depth are both normalized using empirically found multipliers to make them dimensionless for addition in Algorithm 1. Figure 2 shows the cancelation strategy, with the number of events to be canceled at every position shown in Figure 2C. With the prey motion being faster than the steady environment, the activity corresponding to the prey persists even after the cancelation while the activity corresponding to the stationary background gets canceled. Figure 2D shows the image after canceling out the ego-motion generated events. Horizontal and vertical binned histogram computation of the number of surviving pixels in this image gives the approximate position of the prey. However, this analysis relies on an event-accumulated frame-based computation which adds the additional overhead of frame accumulation on the asynchronous event stream from the DVS camera. Processing the incoming events in the matched asynchronous modality offers higher speed and energy efficiency in the sparse computation effort. This is because accumulating the frame followed by cancelation (matrix operations on an n × n matrix) adds O(n² + m) complexity, where m is the number of events. On the other hand, processing the events independently allows a complexity of O(m). Therefore, we propose a four-layered SNN for processing Algorithm 1 in real time. The network gets its inspiration from recent neuro-biological discoveries explained in Section 4.
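For concreteness, a frame-based paraphrase of this cancelation strategy is sketched below. The function name, the local window size, and the normalization multipliers k_v and k_d are illustrative assumptions rather than the paper's exact parameters; the point is only to show how the cancel mask scales with self-velocity and depth before the histogram readout.

```python
import numpy as np

def cancel_ego_motion(p, depth, v_h, v_v, k_v=1.0, k_d=1.0, window=5):
    """Frame-based paraphrase of Algorithm 1. From every patch of active
    pixels, a number of events proportional to the normalized self-velocity
    and to the pixel depth is canceled; the prey position is then estimated
    from binned histograms of the surviving pixels.

    p       : HxW binary event-accumulated frame ("p" in the text, 0/1)
    depth   : HxW per-pixel depth in meters
    v_h,v_v : scalar self-velocity components (v_H and v_V in the text)
    k_v,k_d : empirical normalization multipliers (assumed values)
    window  : size of the local patch used to count activity (assumed)
    """
    # Per-pixel cancel demand (the "cancel" mask in the text).
    cancel = k_v * (abs(v_h) + abs(v_v)) + k_d * depth
    # Count active pixels in a local window around every pixel.
    pad = window // 2
    padded = np.pad(p.astype(float), pad)
    local = np.zeros_like(p, dtype=float)
    h, w = p.shape
    for dy in range(window):
        for dx in range(window):
            local += padded[dy:dy + h, dx:dx + w]
    # A pixel survives only if its patch has more activity than the demand.
    survived = (p == 1) & (local > cancel)
    # Horizontal and vertical binned histograms give the prey estimate.
    x_est = int(np.argmax(survived.sum(axis=0)))
    y_est = int(np.argmax(survived.sum(axis=1)))
    return x_est, y_est
```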
Every incoming event carries its location (x, y), time of generation (t), and polarity (p), feeding the input layer of the network shown in Figure 3A. Each spiking neuron obeys the integrate-and-fire (IF) dynamics shown in the following equations.
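In the notation defined next (membrane potential V, synaptic weights W_i, spikes S, threshold V_th), a no-leak IF update consistent with the surrounding description can be written as below; this is a reconstruction from that description, not a verbatim copy of the original equations.

```latex
\begin{aligned}
V_{[x,y]}(t) &= V_{[x,y]}(t-1) + \sum_{i} W_{i}\, S_{i}(t-1) \\
S_{[x,y]}(t) &=
\begin{cases}
1, & \text{if } V_{[x,y]}(t) \ge V_{th} \quad (V_{[x,y]} \text{ is then reset to } 0)\\
0, & \text{otherwise}
\end{cases}
\end{aligned}
```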
The summation term corresponds to the incoming current from the connected neurons (denoted by i) that spiked at the previous time instant. The synaptic weight from neuron "i" to the neuron being updated ([x, y]) is denoted by W_i. The spiking of a neuron is denoted by S, where S = 1 if the membrane potential exceeds the spiking threshold (V_th). The input from the previous synapses drives the output neuron at the immediate next time step. This avoids the incorporation of synaptic delays and the computation of time-delayed currents, simplifying the computation.
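As a minimal illustration of this delay-free update in code (function name and array layout are assumptions):

```python
def if_step(V, x, y, weighted_input, V_th=1.0):
    """One event-driven integrate-and-fire update for the neuron at [x, y]
    (no leak, no synaptic delay): the weighted presynaptic spikes from the
    previous time step are summed, and the neuron fires and resets when the
    membrane potential crosses V_th. V is a 2-D array of potentials."""
    V[y, x] += weighted_input          # sum_i W_i * S_i(t-1), precomputed
    if V[y, x] >= V_th:
        V[y, x] = 0.0                  # reset after spiking
        return 1                       # S = 1
    return 0                           # S = 0
```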
The first layer takes in the event stream from the event camera (Figure 3A). This is connected to the next layers for vertical (Layer 2V) and horizontal (Layer 2H) event cancelation. Every neuron in the DVS layer drives "span" neurons above it
in Layer 2V and "span" neurons to the right of it in Layer 2H with synapses of unit weight. Layer 2 is also driven by velocity-encoding neurons and depth-encoding neurons. Both velocity neurons and depth neurons are connected using inhibitory synapses. The predator's self-velocity is calculated using the accelerometer readings from the IMU in the current step, converted to multi-neuron spiking activity by discretizing v_H and v_V, and connected to layer 2 using inhibitory synapses. Every velocity neuron is connected to all neurons in layer 2. Depth neurons are connected to the neurons at the same position in layer 2.
For every incoming spike at position [x, y], the membrane potential of the neurons in layer 2 rises by a fixed amount given by the synaptic weights from the DVS layer while it is pulled down by the velocity and depth neurons. Only when a continuous spatial region has persistent activity (Figures 3C,D) is the potential rise enough to cause a spike (Figures 3E,F). This naturally cancels out the noisy cluttered events. The self-velocity and depth for every pixel determine the minimum width of the spatially continuous spiking patch required to trigger spiking in layer 2. A large self-velocity causes more spikes in a patch that need to be removed. Therefore, higher self-velocity requires
wider patches of continuous activity to cause spiking in layer 2, and vice versa. The synaptic weights have a unit value for all the excitatory synapses. The negative (inhibitory) weights of the velocity and depth neurons depend critically upon the resolution of the event camera and the FoV. They are empirically calculated to ensure exact cancelation of ego-motion when there is no prey drone in the environment. Figure 3 shows the network along with the activation and spiking in each layer. The membrane potentials of the neurons are shown in Figure 3B. Stationary objects have sparser events, as shown in Figure 3B, causing a small potential rise in layer 2. This causes spiking to be sparse in these regions. Thus, persistent spiking in layer 2 happens only in the region corresponding to the prey drone, and layer 2 carries out the filtering activity of Algorithm 1 in an asynchronous spiking manner. The intersection of surviving activity in both layer 2H and layer 2V corresponds to the region of the prey. Layer 3 carries out an AND operation using excitatory connections, making the activity survive only when both layers have spiked in that region. This ensures that only the pixels surviving the cancelation of both vertical and horizontal motion contribute to the identification of the prey. This is shown in Figure 3G. Layer 4 identifies the pixel with the highest spiking activity by computing the histograms shown in Figure 2D. All neurons in a row of layer 3 are connected to vertical position neurons in layer 4, and similar connections are used for the horizontal position. High sustained activity within a column/row drives the horizontal/vertical position neuron to spike. The intersection of the maximum spiking activity detected by the vertical and horizontal position neurons is declared as the estimated position of the target (prey drone).
The asynchronous incoming events require continuous operation of layer 1. However, the actual position of the target need not be updated every microsecond because of the finite mechanical delay in actuating the predator drone. Thus, layers 3 and 4, which infer the presence of the target from the spiking pattern in layer 2, are evaluated at a fixed time interval called an epoch, which determines the throughput (outputs per second) of the system. The throughput is also called FPS at some points because of its resemblance to the throughput of the frame pipeline. At the end of every epoch, layers 2 and 3 are reset back to the resting potential. This prevents an unnecessary build-up of potential from previous activity from interfering with future detections in the absence of leakiness. It also saves the storage and computation of the previous spiking time-stamp for every pixel that would otherwise be needed to calculate the leakage within the neuron for every incoming event. As there is no restriction on the frame rate of the DVS, the epoch can be made arbitrarily small, increasing the throughput. However, a very small epoch leaves only a small number of incoming events to infer from, and noise leakage causes an accuracy drop. Even so, the epoch duration is still significantly smaller than the inter-frame time interval of the optical camera, giving a higher FPS for the SNN pipeline.
The trade-off is explored in detail in Section 3.3. All neurons are restored to the reset potential of "0" after an epoch is over. The SNN proves useful when the prey generates a large number of events compared to the background. This condition naturally exists when the prey is close. The accuracy of the SNN degrades gradually as the prey moves farther. However, for prey at a distance, CNN works reliably as the prey cannot escape the FoV quickly and can be tracked.
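To make the layer connectivity and the epoch-based readout concrete, a simplified event-driven sketch is given below. The class name, the default span, and the inhibitory weight values are assumptions, and the velocity/depth inhibition is folded into each event update for brevity; the real filter delivers it through dedicated spiking neurons and tunes the weights empirically per camera resolution and FoV.

```python
import numpy as np

class EgoMotionFilterSNN:
    """Sketch of the four-layer ego-motion cancelation filter."""

    def __init__(self, height, width, span=10, v_th=1.0, w_vel=-0.05, w_depth=-0.02):
        self.h, self.w, self.span, self.v_th = height, width, span, v_th
        self.w_vel, self.w_depth = w_vel, w_depth
        self.v2v = np.zeros((height, width))  # layer-2V membrane potentials
        self.v2h = np.zeros((height, width))  # layer-2H membrane potentials
        self.s3 = np.zeros((height, width))   # layer-3 spike counts in this epoch

    def on_event(self, x, y, v_h, v_v, depth):
        """Process one DVS event at (x, y) given the current self-velocity
        components and the depth at that pixel."""
        inhibit = self.w_vel * (abs(v_h) + abs(v_v)) + self.w_depth * depth
        # Unit-weight excitation of `span` neurons above (2V) and to the
        # right (2H) of the event, pulled down by the inhibitory term.
        rows = slice(max(0, y - self.span), y + 1)
        self.v2v[rows, x] += 1.0 + inhibit
        cols = slice(x, min(self.w, x + self.span + 1))
        self.v2h[y, cols] += 1.0 + inhibit
        fired_v = self.v2v[y, x] >= self.v_th
        fired_h = self.v2h[y, x] >= self.v_th
        if fired_v and fired_h:          # layer-3 AND of 2V and 2H spikes
            self.s3[y, x] += 1
        if fired_v:
            self.v2v[y, x] = 0.0         # reset after spiking
        if fired_h:
            self.v2h[y, x] = 0.0

    def read_out_and_reset(self):
        """Layer-4 readout at the end of an epoch: the intersection of the
        most active column and row is the estimated prey position."""
        x_est = int(np.argmax(self.s3.sum(axis=0)))
        y_est = int(np.argmax(self.s3.sum(axis=1)))
        self.v2v[:] = 0.0
        self.v2h[:] = 0.0
        self.s3[:] = 0.0
        return x_est, y_est
```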
2.2. Prey detection via CNNs
A convolutional neural network is required to add fault tolerance to the reasonably accurate and fast SNN. Drone detection using CNNs is well-explored (Nalamati et al., 2019) with different models and training methods having different accuracy vs. latency characteristics (Aker and Kalkan, 2017; Sun et al., 2020; Singha and Aydin, 2021). The CNN provides a bounding box around the drone. The mid-point of the bounding box is used as the CNN output. This provides an anchor position for the fusion algorithm to determine whether the SNN outputs are usable. However, it is important to note that the final task at hand is target localization for a closed-loop chasing application. Therefore, the exact dimensions of the bounding box do not have the stringent restriction required in previous works where an accurate object detection task is intended. Additionally, the CNN output provides a reasonable estimate of the region of presence of the target within the FoV for actuating the predator platform. The Euclidean distance of the SNN and CNN outputs from the true mid-point of the target's position is used for calculating the accuracy. We fuse the output of the neuroscience-inspired SNN filter with an established electronic CNN pipeline for boosting the throughput of target localization to track evasive target prey. The accuracy vs. latency trade-off within the CNN caused by different models and detection algorithms affects the final accuracy after fusion. Thus, the selection of the feature detection backbone and detection method forms a key decision. These trade-offs are explored in Section 3.3 and the choice of network is explained there.
Reconstruction of an intensity image from the events produced by the DVS, followed by conventional CNN-based object detection (Rebecq et al., 2019), is possible and would save the additional optical camera required in our work. Low-cost reconstruction approaches have been demonstrated in Liu and Delbruck (2022) for optical flow calculation, where a binary intensity frame is generated by event accumulation followed by block matching for calculating the local optical flow. Mohan et al. (2022) uses event-accumulated binary frames for traffic monitoring, detecting moving cars with a stationary event camera. However, our work requires frame-based accurate target detection using the CNN for maintaining the overall accuracy of the system. Thus, we expect that this application will benefit from reliable intensity
information requiring accurate event-to-frame reconstruction. These approaches are typically computationally heavy (Wang et al., 2019), consuming vital circuit resources. We, therefore, take the approach with separate optical and event-based cameras in this work.
2.3. Target localization - fusing the SNN and CNN outputs
The event and frame pipelines specialize in capturing temporal and spatial details, respectively, so their strengths in latency and accuracy complement each other. The fused system uses either the most recent SNN output or the CNN output as the final localized position of the target and uses it to actuate the predator drone for chasing.
When the target has not been "seen" by the CNN, the SNN looks for suspicious activity at its high speed. The fusion algorithm uses the SNN output as the final localized position of the target if multiple SNN outputs are spatio-temporally consistent with each other. This causes the predator to start chasing the prey drone at the final fused position even before the CNN checks whether it is the required target. Thus, the fusion algorithm needs to signal the CNN to confirm whether the activity corresponds to the required target, adding object selectivity for a target. Chasing the SNN-detected activity makes sure that the prey does not enter and evade the FoV of the predator before the CNN can process it.
Secondly, when the target is in the close vicinity and generates significant activity, the SNN needs to provide the high-speed output for actuation while the CNN sporadically confirms the prey position. When a CNN output is available, the subsequent SNN outputs use it as an anchor to check their spatio-temporal consistency. Therefore, both SNN and CNN outputs are required to ensure correct chasing, both before and after the presence of the target is confirmed within the FoV. However, one of them is better suited depending upon the distance between the prey and predator as the predator passes through different stages of capturing the prey. These are listed below.
• Case-1 (Finding the prey): The predator rotates around itself to find the prey in the environment around it. Any spurious event activity causes consistent SNN outputs to build suspicion. The CNN also keeps detecting in parallel. If multiple SNN outputs infer the same region (spatiotemporal consistency), then the suspicion level rises beyond a threshold. This indicates the possibility of the prey being present and the predator starts approaching while the CNN is triggered to provide its inference for validation.
• Case-2 (Approaching the prey): A relatively long distance between the predator and prey causes the prey to generate a small number of events in the event camera output. Thus, it is highly likely that this activity gets canceled by the SNN filter. However, the CNN is reliable in this domain because the prey stays in the FoV for a longer time and CNN latency is permissible. This allows the CNN inference to track accurately with a relaxed constraint on latency.
• Case-3 (In the close vicinity of the prey): As the predator approaches the prey, the event activity of the prey increases, making the SNN more reliable. Simultaneously, the latency constraint gets stringent as the prey can evade quickly. Therefore, the fusion mechanism works best in this phase. The noisy SNN inference is compared with the CNN inference for spatial continuity and with the SNN output from the previous epoch for temporal continuity. A spatio-temporally consistent SNN output is declared as the position of the target.
The error-compensating fusion scheme is outlined in Algorithm 2. The predator starts by searching for the prey by rotating around itself till the prey is found by either the SNN or the CNN. At every epoch of processing, the RGB frame and IMU data are captured while the event stream continuously comes in. The SNN filter operates continuously to identify if the prey enters the frame and generates an output after every epoch. Once activity is detected, the output has to go through a spatio-temporal consistency check with the recent SNN and CNN outputs. This is carried out by defining a suspicion level. If the position identified by the SNN [position_SNN(t)] at time step "t" is close to the most recent CNN detection, then this indicates spatial continuity with the reliable CNN output and this SNN output is declared as the final fused position (position_fused). However, it might be possible that the CNN has not detected the prey yet (found_CNN = 0). In this case, position_SNN(t) is also compared with the identification of the SNN at the previous epoch, position_SNN(t − 1), to check temporal continuity. If the SNN outputs are spatially close within the FoV, the suspicion level rises. This makes sure that the SNN outputs correspond to a genuine external motion in the region. Once the suspicion level is beyond a threshold, the SNN output is declared as the final fused output.
If the suspicion score rises above the predefined threshold, this also triggers the CNN to confirm that the detection corresponds to the prey. The CNN is also activated after every fixed period of time. The area of the bounding box detected by the CNN is used to estimate the distance between the predator and prey. A larger bounding box corresponds to the prey being in close vicinity. Depending upon the distance between the prey and predator, the relative importance of SNN and CNN are determined. If the prey is close, then most compute resources can be allocated to SNN with sparser CNN validations. Whereas if the prey is far, the CNN is made to operate at maximum throughput by taking compute resources from SNN as required in case-2. Depending upon the position of the prey identified in the FoV, the actuation velocities are selected with the goal of keeping the prey at the center of the frame.
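A compact sketch of this per-epoch decision logic is given below. The distance and suspicion thresholds and the incremental suspicion update are illustrative assumptions; Algorithm 2 additionally modulates the CNN operating frequency using the bounding-box-based distance estimate, which is omitted here.

```python
import math

def fuse_step(snn_pos, prev_snn_pos, cnn_pos, cnn_found, suspicion,
              dist_thresh=50, susp_thresh=3):
    """One epoch of the fusion logic. Positions are (x, y) tuples or None.
    Returns (fused position or None, updated suspicion, trigger_cnn flag)."""

    def close(a, b):
        return a is not None and b is not None and math.dist(a, b) < dist_thresh

    fused, trigger_cnn = None, False
    if cnn_found and close(snn_pos, cnn_pos):
        # Spatial continuity with the reliable CNN anchor: accept the SNN output.
        fused, suspicion = snn_pos, susp_thresh
    elif close(snn_pos, prev_snn_pos):
        # Temporal continuity between consecutive SNN outputs: suspicion grows.
        suspicion += 1
        if suspicion >= susp_thresh:
            fused = snn_pos       # start chasing the suspected prey ...
            trigger_cnn = True    # ... and ask the CNN to confirm it
    else:
        suspicion = max(0, suspicion - 1)   # inconsistent activity decays
    return fused, suspicion, trigger_cnn
```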
The allocation of computing resources to SNN and CNN by tuning the operating frequency of the CNN dynamically depending upon the distance between the prey and predator assumes the same computing platform being used for the implementation of both SNN and CNN. If the same platform has enough resources to share (e.g., FPGA) for running both pipelines in parallel, then both SNN and CNN can be operated at its maximum throughput and multiple epochs of SNN outputs would be compared with the most recent CNN output for spatial continuity.
3.1. Verification using virtual environments
The autonomous flights of drones within virtual environments are enabled by PEDRA (Anwar and Raychowdhury, 2020). Programmable Engine for Drone Reinforcement Learning Applications connects virtual environments created in Unreal Engine to airsim-enabled drones (Shah et al., 2018) through a module-wise programmable Python interface. User-defined environments can be created within Unreal Engine with varied levels of complexity, as used in typical gaming platforms. Multiple drones can be instantiated with a set of image, depth, and inertial sensors mounted on them using airsim. The drones can be actuated at specific velocities and orientations to interact with the environment. The actuation can be pre-programmed for every time step or can be determined by the CNN inference on the images captured by the onboard camera. Images can be captured from the point of view of the drone and processed using TensorFlow to determine the actuation of the drone for the next time step. Programmable Engine for Drone Reinforcement Learning Applications provides a training and evaluation framework for tasks that otherwise cannot be directly tuned on a flying platform. We instantiate a prey and a predator drone in multiple virtual environments created for this study. As PEDRA only provides frame-based image sensing, we add experimentally calibrated frame-to-event conversion using the v2e tool. This provides a time-stamp encoded event stream by fine-grained interpolation of images and calculation of intensity differences, calibrated with real DVS cameras. Thus, both event-based and frame-based visual data are added to the existing PEDRA infrastructure. The images and event-stream captured by the predator drone are handed over to the Python backend implementing both the SNN and the CNN. We program the trajectory of the prey drone while the predator is controlled using the output of the vision backend. We use an Intel i9 processor and an NVIDIA Quadro RTX 4000 GPU for the simulation experiments. Both networks provide their outputs as the center point of the detected target, which are used in the fusion algorithm to determine the final fused target position.

3.1.1. Operation of fusion algorithm

The outputs from both pipelines along with the final fused output can be seen in the fusion demo (proof of concept). The prey and predator start at a distance with the prey drone being out of the FoV of the predator (Figures 4A,B). This corresponds to case-1. The SNN outputs in this phase catch only the noise and stationary background and do not have spatio-temporal consistency. Therefore, the SNN outputs are incorrect in this part (Figure 4C). The CNN operates sparsely and its detections also verify that the prey is not present in the FoV. This causes the suspicion score to stay at zero (Figure 4D). As the predator rotates, the prey appears within the FoV, causing the SNN to provide outputs that lie in the same region as the previous SNN outputs (case-2). This builds up the suspicion level for the SNN (Figure 4D, case-2). When the suspicion level exceeds the threshold, the CNN is activated, validating that the prey is present in the FoV. The suspicion level can be seen to go down quickly in this region for case-2. This is because the distance between the prey and predator is still high and the SNN outputs are not very reliable.
As the distance between the predator and prey reduces, the system enters case-3 where rapid accurate outputs are required from the SNN with sparser CNN verification. This is reflected in the high suspicion level in this phase, where spatio-temporally consistent outputs from the SNN cause the suspicion level to rise and stay high. Figure 4C also shows correct SNN outputs in the region corresponding to case-3. When an occasional SNN output is incorrect while the CNN has a reliable detection (Figure 5B), the fusion algorithm corrects this as the final fused output uses the CNN output (Figure 5C). Figure 5D shows the top view of the trajectories of the prey and predator from the demo video, denoting the regions of cases 1-3 as the predator passes through them.
3.1.2. Study in multiple environments and trajectories
The previous proof of concept is extended to two forest environments with sparse and dense backgrounds. The denser background is expected to create more self-motion-caused events, which in turn makes the SNN output noisier. The prey drone is programmed to fly with different evasive trajectories that make the prey enter the FoV for a brief period and escape. The high-speed fused (SNN+CNN) vision system is expected to be able to track these evasive trajectories. Both fused and CNN-only (frame pipeline only) systems are compared to establish the superiority of the fused system caused by the higher throughput provided by the SNN. The video demonstration for comparison is available as the multi-environment validation video. Interested readers are strongly encouraged to watch the video to understand the interplay between the frame and event pipelines.
Representative final trajectories taken by the prey and predator for two of the trajectories in both environments are plotted in Figure 6. The prey can be seen to have a curvy trajectory as it tries to move out of the predator's view. The distance between the prey and predator as the algorithm progresses is plotted in the bottom sub-plots (Figures 6I-L). The CNN-only system is unable to keep up with these quick evasions and the prey moves out of the FoV for both sparse and dense environments (Figures 6A-D). This can be seen as the distance between the prey and predator rises for the CNN-only system at least once in the chase. The fused (SNN + CNN) system tracks the prey for a longer duration by keeping it within the FoV (Figures 6E-H). This maintains a small distance between the prey and predator as the predator chases the prey. We also notice a few runs where the fused system is not able to keep up and the prey escapes even with the higher frame rate. These experiments validate the potential of a fused system in having high-speed tracking while maintaining high accuracy.
We observe that the algorithm critically depends on the CNN detection for validating the SNN outputs. The failure cases typically correspond to the runs where the CNN makes a mis-detection and the prey escapes. Thus, a reliable CNN is highly desirable. Secondly, the accuracy of the SNN is low in the denser environment, which causes the suspicion level to rise more slowly because of the mis-identifications. This sometimes causes the prey to escape. Incorrect CNN detection occurs more frequently in the cluttered denser environment. Therefore, the system is better suited for scenarios with smaller background clutter like outdoor high-altitude applications.
3.1.3. Mitigating the accuracy vs. latency trade-off
We now assess the accuracy vs. latency trade-off in all three categories, namely SNN-only, CNN-only, and fused SNN+CNN. The SNN and fused detection provide a single point as output whereas the CNN provides a bounding box. The mid-point of the bounding box is taken as the CNN output. The accuracy for the SNN/CNN/fused results is calculated by checking if the predicted position is within a 50-pixel distance of the manually annotated position. Our accuracy metric checks whether the predicted and actual positions are within a similar region for actuating the predator drone to keep the prey within the FoV. Our closed-loop chasing uses the visual output at every time step to calculate the actuation velocities such that the prey gets centered within the FoV as the chasing progresses. This does not require exact bounding boxes, and the coarse localization (Lee et al., 2018; Zhang and Ma, 2021) provided by the single-point outputs is adequate. Other high-precision object detection approaches typically calculate the exact overlap between predicted and manually annotated bounding boxes in the image frame followed by evaluating mean average precision (mAP). However, we use center location error thresholding (50 pixels) instead of mAP as the comparison
metric for the coarse single object localization task at hand. This center location error thresholding metric has also been used previously to calculate the accuracy of single object tracking (Wu et al., 2013) and chasing. We confirm the working of the system with such a coarse detection scheme in the multi-environment demonstration video provided in the previous subsection. Figure 7 shows the accuracy and latencies obtained for the four different trajectories shown in the video, with three runs per trajectory for both virtual environments. Each point corresponds to the average accuracy for a trajectory. The latencies of the SNN and CNN pipelines are extracted from the hardware estimation described in Section 3.4. The CNN shows near-perfect accuracy with a longer latency (from Section 3.3.2) as shown in Figure 7. Noisy outputs of the SNN-only system cause the prey to evade the predator in the initial time steps and it detects false positives once the prey exits the FoV. This causes the SNN to have a very low accuracy. This causes the CNN and SNN pipelines to occupy the opposite ends of the trade-off shown in Figure 7 for both environments. The fused system compromises the accuracy slightly while maintaining a small latency, allowing efficient tracking even for quick evasive trajectories. The fusion algorithm reduces false alarms caused by the noisy SNN while preserving the true positive outputs. The fused latency is calculated by dividing the total latency by the number of outputs from both the SNN and CNN during the entire execution of the operation. Thus, the accuracy vs. latency trade-off can be seen to be mitigated with a fused system with event + frame hybrid processing.
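For reference, the center-location-error accuracy used above reduces to the simple computation sketched here (function name is an assumption; the 50-pixel threshold is the one quoted in the text):

```python
import math

def localization_accuracy(predictions, ground_truth, pixel_thresh=50):
    """Center-location-error accuracy: fraction of outputs whose predicted
    (x, y) lies within pixel_thresh pixels of the annotated prey position."""
    correct = sum(math.dist(p, g) <= pixel_thresh
                  for p, g in zip(predictions, ground_truth))
    return correct / len(predictions)
```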
3.2. Real-world demonstrations

3.2.1. Real-drones with emulated event data

The system was verified in both indoor and outdoor real-world settings as the next step. The DJI Tello Edu is used as the predator drone. This drone has a frame-based camera streaming the data to a local computer. The computer actuates the drone by processing the data through a wireless link. As the IMU readings are unavailable for this small drone, the actuation velocity of the previous step is used as the self-velocity in the current step for the SNN. A Holystone 190S drone, flown manually, is used as the prey. Conversion of frames to events takes a long time with the video interpolation strategy used in v2e. This makes the drones drift in the air with the wind while the inference takes a long time. To avoid this issue, we use the difference between consecutive frames and threshold it to emulate the event-accumulated frame. The communication of the image and actuation velocities for the predator drone consumes 30 ms. Figure 8 shows the screenshots of the experiments recorded in video-1. The captured frames and detected drone positions can be seen in the video. Figures 8A,B show the two steps in following the prey drone flying away while the predator drone autonomously follows it. Figures 8C,D show the prey drone making a turn to evade the predator, which eventually tracks it. This demonstrates the feasibility of the implementation of a closed-loop target tracking system. Although the realistic noise in the DVS and IMU is not incorporated in these experiments, the multi-pipeline outputs are fused to generate an accurate inference. The desired chasing action from the predator drone demonstrates the potential of the system in a closed-loop setup.
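The frame-difference event emulation used in these real-drone runs amounts to the small routine below (the threshold value and function name are assumptions; it is a cheap stand-in for v2e, not a calibrated event model):

```python
import cv2
import numpy as np

def emulate_event_frame(prev_bgr, curr_bgr, threshold=25):
    """Threshold the absolute difference of consecutive grayscale frames to
    obtain a binary event-accumulated frame."""
    prev_gray = cv2.cvtColor(prev_bgr, cv2.COLOR_BGR2GRAY).astype(np.int16)
    curr_gray = cv2.cvtColor(curr_bgr, cv2.COLOR_BGR2GRAY).astype(np.int16)
    return (np.abs(curr_gray - prev_gray) > threshold).astype(np.uint8)
```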
3.2.2. Hand-held DVS data
The experiments so far emulate the output of the DVS in a frame-like manner. However, real DVS data with a real IMU provides significant noise that the system needs to tolerate. The depth and event cameras do not align exactly, and the robustness of the system needs to be tested for all these inherent inaccuracies of the real hardware. Therefore, we test the system on real data recorded with a hand-held DVS, a depth camera, and the corresponding IMU readings. We use a DVXplorer and a Realsense d435i bound together as the camera assembly, and the prey drone is flown manually in front of it in an indoor lab setting. The Realsense camera provides the IMU readings (62.5 Hz for the accelerometer and 200 Hz for the gyro-sensor). The self-velocities are calculated at the rate-limiting 62.5 Hz and are used for SNN outputs until a new IMU reading is acquired. The depth information is acquired at 90 FPS. The SNN uses the previous depth information until a new depth frame is captured by the camera. This results in a slight lag between event and depth information if the operating throughput of the SNN is higher than 90 FPS (264 FPS in this case). However, the SNN-estimated position can be observed to be reliable despite this lag, as shown in video-2. The camera assembly is hand-held and always points toward the prey drone. The drone escapes the FoV and re-enters. The captured data from the DVS and the optical camera is aligned manually with a simple linear translation and scaling of the image. The data is processed using the algorithm, providing the outputs of the CNN, SNN, and fused system. The details are available in video-2. A screenshot from the video is shown in Figure 9. The spiking activity of the layers of the SNN shows how ego-motion cancelation allows the activity corresponding to the prey to survive. The algorithm can be seen to work even in the highly cluttered indoor setting with reasonable accuracy. The system uses the faster SNN outputs along with the CNN outputs to boost the throughput of the overall system. Even though this system does not close the loop with autonomous actuation, the working of the system with real data predicts that it is capable of running on an aerial platform. The accuracy can be improved further by building event + frame datasets for object tracking using mobile platforms. Training the SNN using such datasets may improve the overall accuracy of the system. A future step would involve mounting the assembly on a drone to close the loop from sensing to actuation.
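The rate mismatch between the 62.5 Hz IMU, the 90 FPS depth stream, and the 264-output/s SNN is handled by simply reusing the most recent sample of each slower stream; a minimal sketch of this sample-and-hold behavior (class and method names are assumptions) is shown below.

```python
class LatestSampleHold:
    """Each slower stream (IMU self-velocity, depth frames) overwrites its
    most recent sample; the SNN reads whatever is currently held at every
    epoch, accepting a slight lag relative to the event stream."""

    def __init__(self):
        self.latest = {}

    def push(self, name, timestamp, value):
        self.latest[name] = (timestamp, value)   # keep only the newest sample

    def read(self, name):
        entry = self.latest.get(name)
        return None if entry is None else entry[1]
```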
3.3. Design space exploration
Design parameters like the "span" and the noise in self-velocity affect the SNN output. In addition to this, the selection of the epoch duration determines the SNN latency and throughput and presents an internal accuracy vs. latency trade-off for the SNN. For very short epoch intervals (for high throughput), an inadequate number of events is processed, injecting noise. This causes lower accuracy. For a lower SNN throughput, higher accuracy is achievable. On the other hand, the feature detection model and object detection method present another accuracy vs. latency trade-off within the CNN pipeline. Large CNN models typically have higher accuracy at the cost of slower execution. All these design variables offer a wide range of parameters to choose from. We explore these design choices in this section. The optimal parameters observed in this section are used in the experiments presented in the previous discussion.

FIGURE 9: Screenshots from the processing of the data recorded using the multi-camera assembly. The spiking activity of the intermediate layers of the SNN can be seen to cause self-motion cancelation.
3.3.1. Parameter tuning for event pipeline

The "span" and the noise in self-velocity directly affect the spiking pattern in the SNN. The exact self-velocity of the predator is available in the simulation environment, whereas it is noisy when acquired as the accumulated accelerometer output from the real IMU data. Therefore, we use simulations in the virtual environment for finding the optimal values for these parameters and their effect on the accuracy of the SNN output. We also investigate whether the fused SNN + CNN system is capable of improving the accuracy for these empirical parameters. The experiments are carried out for the trajectory shown in Figure 4D.
• Span: In the first experiment, the span of connectivity between layer 1 and layer 2 is swept from 6 to 12 in steps of 2. A higher span indicates higher injected activation in layer 2 for every incoming event from the DVS. This results in a higher chance of spiking in layer 2 and thus a higher probability of finding persistent activity. However, the chance of mistaking a steady object for the target also increases with higher activity injection. Thus, both false positive and true positive outputs rise as the span is increased. Three experiments are carried out for each combination and for both the sparse and dense environments. The results are plotted in Figures 10A,B. The accuracy can be seen to improve from SNN-only identifications to SNN+CNN fusion for most of the data points. We use a span of 10 as it provides higher relative accuracy in both sparse and dense environments.
• Noise in self-velocity: An accurate reading of the self-velocity plays a key role in the self-motion cancelation network. This bio-inspired approach relies on the assumption that the IMU sensors can provide an accurate estimate of the pose and speed. However, the sensors are often noisy in a real-world scenario and it is necessary to test the limits on error tolerance. We add noise to the velocity readings, and the noisy simulations affect the accuracy of the SNN. Figures 10C,D show that a high percentage of velocity noise can be tolerated by the algorithm, highlighting its robustness. The SNN-only accuracy is lower compared to the fused accuracy, with the CNN validations boosting the accuracy. The degradation in accuracy for the SNN is larger for the dense cluttered environment, as expected.
The simulations show that both the span and the noise in self-velocity have a weak correlation with the accuracy of the event pipeline. However, the accuracy improves significantly after fusion with the CNN output as noisy SNN estimates are eliminated. Additional exploration using real DVS data with accurate pose estimation in different environments can be carried out in the future.
• Epoch duration (SNN latency): The epoch duration in the SNN controls the accuracy and latency of the event pipeline. The events generated within an epoch duration are used to generate an SNN inference. Therefore, the epoch duration controls the SNN throughput and latency. This experiment cannot be reliably carried out in the virtual environment because v2e reports simulated time stamps. Therefore, the experiment is carried out using the real DVS data from Section 3.2.2. The data is manually labeled for the position of the prey. The duration of an epoch is varied in Figure 11A to find the accuracy of the SNN (event pipeline). A smaller epoch duration results in a higher throughput for the SNN. The plot shows that the accuracy monotonically increases for a larger epoch duration. This indicates that a smaller epoch duration provides only a small number of events to generate an inference from. This results in more noise injection and a reduction in the accuracy. A longer epoch produces the large number of events required for a reliable output. A high SNN throughput results in more SNN outputs between every consecutive pair of CNN detections. The effect of this on the final fused accuracy is explored next.
The virtual environments used in this case alter the amount of background clutter and show similar trends in the hyperparameters. Therefore, we expect the trends to hold for other scenarios with similar testing setups. However, if the setup changes drastically, e.g., very high-speed chasing in a high-altitude environment, the tuning may need to be carried out again.
3.3.2. Model selection for frame pipeline
The CNN needs to detect the prey drone accurately and quickly for accurate fusion. In case of an incorrect detection, the SNN identifications after it rely on it for updating the suspicion level, and the subsequent outputs result in accuracy degradation. Therefore, a high accuracy is desirable. Simultaneously, if the CNN is too slow, then multiple SNN outputs get processed between two consecutive CNN outputs, inducing inaccuracy in the final fused output. The key requirement for the CNN here is the ability to track small drones. This is because the setup is completely dependent upon the CNN when the prey is far away, corresponding to case-2. Thus, a reliable, fast CNN capable of small object detection is required. Previous surveys on small object detection datasets (Chen et al., 2016; Pham et al., 2017; Nguyen et al., 2020) show that YOLO and Faster-RCNN have higher accuracy compared to single-shot detectors. The size of the feature detection backbone also plays a key role in the accuracy and latency of the CNN. Thus, the design space consists of multiple object detection methods and feature extraction networks to choose from.
First, we train multiple models and find their respective accuracies. We use the data recorded from the hand-held camera assembly that captures both the event stream and frames of the flying prey drone simultaneously. The image frames from this dataset are manually labeled. The data consists of 1,200 training images and is validated on a video consisting of 400 frames. Additionally, images from Lin (2020) and Gupta (2020) are added for more diverse training. The feature extraction networks pre-trained on the ImageNet dataset are obtained from MATLAB.
The networks are trained and tested to find the accuracies shown in Figure 11B. The accuracy for large feature extraction networks like ResNet50 is higher than for the smaller networks, as expected. Faster-RCNN detectors have higher accuracy, as observed in previous literature (Pham et al., 2017; Nguyen et al., 2020), because the target prey drone is small and Faster-RCNNs are better suited for small object detection.
In the second step, we calculate the latency of each of the networks on the edge-FPGA of the Zynq-7000 (explained in Section 3.4). We use ScaleSim (Samajdar et al., 2018) as the architectural simulator for latency characterization. ScaleSim models a systolic CNN array architecture, which we configure as per the Zynq-7000 SoC's resource availability. ScaleSim supports resources as powers of two seamlessly; therefore, 400 DSPs are planned to be used in a 16 × 16 systolic configuration. Similarly, 265 kB of BRAM (local memory) is mapped onto a 256 kB SRAM cache. The input size and layer sizes of each network are provided as inputs, and the execution latency for a single image is extracted as the output. The latency is plotted against the accuracy values as shown in Figure 11B. SqueezeNet with YOLOv3, being a small network, has a low inference latency, whereas ResNet50 with Faster-RCNN takes longer to infer. This plot also reveals the accuracy vs. latency trade-off within CNNs that motivates this work. It can be seen that even the fastest CNN is unable to provide very high throughput (>100 FPS), showing the need for the event pipeline.
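For intuition on how layer dimensions translate into latency on a small systolic array, a simplified back-of-envelope cycle model is sketched below. This is not ScaleSim's actual cost model, and the layer shapes are hypothetical; it only illustrates why large backbones inflate inference latency on resource-constrained hardware.

```python
import math

def conv_layer_cycles(out_h, out_w, out_ch, in_ch, k, rows=16, cols=16):
    """Very rough cycle estimate for one conv layer on a rows x cols systolic
    array: each tile of output pixels/channels needs in_ch * k * k
    accumulation cycles, plus an array fill/drain overhead."""
    ch_tiles = math.ceil(out_ch / cols)
    pix_tiles = math.ceil(out_h * out_w / rows)
    return ch_tiles * pix_tiles * (in_ch * k * k + rows + cols)

# Hypothetical first two layers of a small backbone on a 224x224 input:
# (out_h, out_w, out_ch, in_ch, kernel)
layers = [(112, 112, 64, 3, 7), (56, 56, 64, 64, 3)]
cycles = sum(conv_layer_cycles(*layer) for layer in layers)
print(f"~{cycles/1e6:.1f} M cycles -> ~{cycles*12e-9*1e3:.1f} ms at a 12 ns clock")
```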
Parameter selection for fusion algorithm
The accuracy vs. latency trade-off within both the SNN and CNN pipelines affects the performance of the fused outputs (Figure 11C). We run the fusion algorithm on the camera assembly data from Section 3.2.2. The overall accuracy of the fused system is plotted across individual SNN and CNN latencies. The final accuracy after fusion can be seen to depend critically on the CNN model. GoogleNet+FasterRCNN provides the highest final accuracy because this configuration achieves the optimal balance between accuracy and latency. ResNet50+FasterRCNN has very high accuracy, but its longer latency causes incorrect SNN outputs to leak in between consecutive CNN inferences, degrading the overall fused accuracy for that setup. ResNet50+YOLO has worse fused accuracy than SqueezeNet+YOLO because of its longer inference latency, in spite of being slightly more accurate on its own. This study shows that both the accuracy and the latency of the CNN model are of key importance to the final fused accuracy.
The SNN latency determines the overall throughput of the system and also controls the accuracy of the SNN pipeline, as seen in Figure 11A. However, it does not have a critical impact on the overall fused accuracy. This shows that CNN model selection is imperative in determining the fused accuracy of the system, whereas the SNN latency governs the final throughput. The previous results use the parameters tuned in this section. This study provides a methodology to evaluate the choice of the best model and SNN parameters for a given processing platform. Our Zynq-7000 FPGA analysis focuses on edge compute; a larger FPGA can reduce the inference latencies for all CNN architectures, and therefore the choice of the best network may differ. An exhaustive analysis of multiple compute platforms, object detection architectures, and backbone networks may be taken up in the future.
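To make the interplay between CNN validations and intermediate SNN outputs concrete, the sketch below implements one plausible gating scheme: CNN detections re-anchor the track, while SNN estimates between them are accepted only if they stay near the last trusted position, otherwise the suspicion level decays. The gate size, decay factor, and update rule are illustrative assumptions and not the exact fusion algorithm evaluated here.

```python
from dataclasses import dataclass

@dataclass
class FusedTrack:
    xy: tuple          # last fused position estimate (pixels)
    suspicion: float   # confidence that the track still follows the prey

def fuse(outputs, gate_px=40.0, decay=0.8):
    """Merge time-ordered ('cnn'|'snn', (x, y)) outputs into one track.
    CNN detections re-anchor the track; SNN estimates update it only while
    they stay close to the last trusted position."""
    track = None
    for src, xy in outputs:
        if src == "cnn":                       # reliable but slow validation
            track = FusedTrack(xy, suspicion=1.0)
        elif track is not None:                # fast SNN update between CNNs
            dist = ((xy[0]-track.xy[0])**2 + (xy[1]-track.xy[1])**2) ** 0.5
            if dist < gate_px:
                track = FusedTrack(xy, track.suspicion)
            else:                              # inconsistent estimate: distrust it
                track = FusedTrack(track.xy, track.suspicion * decay)
        yield track

# Hypothetical output stream: one spurious SNN estimate between two detections.
stream = [("cnn", (100, 120)), ("snn", (104, 122)), ("snn", (300, 50)),
          ("snn", (108, 125)), ("cnn", (110, 126))]
for t in fuse(stream):
    print(t)
```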
Throughput estimation
The system requires low-power (<10 W), high-speed operation at the edge. It must support a highly compute-intensive CNN with multi-channel convolutions as well as a memory-intensive SNN that stores and updates membrane potentials for a large number of neurons. Thus, the hardware requires parallelization for a fast CNN and block-wise memory for the SNN; dedicated neuromorphic processors (Davies et al., 2018) cannot map the CNN effectively. Using individually optimized boards requires additional effort to synchronize the data and adds inter-board communication latency. Thus, a programmable FPGA offers the optimal trade-off point in the hardware space, with decent support for both pipelines as well as low-power edge operation. The Spartan FPGA family lies in the required low-power range but has very limited resources, so we use the Zynq 7000 FPGA for hardware mapping (BERTEN, 2016). The SNN and fusion pipeline controls the maximum throughput of the network. The micro-architecture of the SNN and fusion system is shown in Figure 12. The input from the event camera, IMU, and depth camera is acquired at the input layer from the IO, and the output of the CNN pipeline is assumed to be acquired from an internal CNN block. Layer 1 requires asynchronous operation as outlined in Section 2.1, while the subsequent layers and the fusion algorithm operate after every time epoch. Layers 1H and 1V are implemented in block RAM for quick access to incoming event packets, which makes the SNN design memory intensive, as 480 × 640 (frame-size) activations must be stored. The IF neurons add up the event activity and store the spiking information for the next layers to process. A counter triggers layers 2 and 3 after the duration of an epoch to identify the position. Thus, the minimum epoch duration (maximum SNN throughput) depends upon the combined execution latency of layers 2, 3, and the fusion algorithm.
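A software analogue of this dataflow is sketched below: per-pixel integrate-and-fire accumulation on every event (layer 1) and an epoch-triggered histogram read-out for localization (layers 2/3). The firing threshold and the argmax read-out are illustrative assumptions rather than the exact hardware behavior.

```python
import numpy as np

H, W = 480, 640
v_mem = np.zeros((H, W), dtype=np.int16)     # layer-1 membrane potentials (BRAM analogue)
THRESH = 3                                   # assumed IF firing threshold

def on_event(x, y):
    """Asynchronous layer-1 update: one integrate step per incoming DVS event."""
    v_mem[y, x] += 1

def on_epoch_end():
    """Epoch-triggered layers 2/3: read out spiking pixels and localize them."""
    spikes = v_mem >= THRESH
    v_mem[:] = 0                             # reset membranes for the next epoch
    if not spikes.any():
        return None
    col_hist, row_hist = spikes.sum(0), spikes.sum(1)
    return int(col_hist.argmax()), int(row_hist.argmax())   # (x, y) estimate

# Hypothetical burst of events clustered around pixel (320, 240):
rng = np.random.default_rng(1)
for _ in range(2000):
    on_event(int(np.clip(rng.normal(320, 5), 0, W - 1)),
             int(np.clip(rng.normal(240, 5), 0, H - 1)))
print(on_epoch_end())
```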
We implement the above architecture using Vitis High-Level Synthesis on the Zynq 7000 SoC (xc7z035-fbg767-1). All SNN layers along with the fusion algorithm are mapped onto the FPGA. The FPGA is operated at a clock period of 12 ns, the fastest clock allowed by synthesis. Layer 1 takes 65 clock cycles per incoming event, including spike generation. Thus, 780 ns are taken for every incoming event, allowing the processing of 1.28 M events/s. Execution of layers 2, 3, and the fusion algorithm takes 3.78 ms. Therefore, the minimum epoch duration is 3.78 ms, for a maximum throughput of 264 FPS. This confirms that a straightforward implementation on an edge-FPGA is able to provide a very high throughput for the SNN. The resources consumed by this implementation are 375 BRAMs (75%), 1 digital signal processor (DSP) (0.1%), 1,073 flip-flops (FF) (0.3%), and 1,782 look-up tables (LUT) (1%), showing low resource consumption on the board. The SNN implementation is memory intensive, whereas a CNN implementation is generally DSP intensive with multiple parallel operations. Thus, we expect complementary resource consumption by the event and frame pipelines, which is directly suitable for FPGAs. An end-to-end, bandwidth-optimized implementation of both pipelines can be taken up in the near future.
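These throughput figures follow directly from the reported cycle counts; a short sanity-check computation:

```python
clk_ns = 12          # synthesized clock period
cycles_per_ev = 65   # layer-1 cycles per incoming event
epoch_ms = 3.78      # layers 2, 3 + fusion execution time

event_rate = 1e9 / (cycles_per_ev * clk_ns)   # events/s layer-1 can absorb
throughput = 1e3 / epoch_ms                   # fused outputs per second
print(f"{cycles_per_ev * clk_ns:.0f} ns/event -> {event_rate/1e6:.2f} M events/s")
print(f"minimum epoch {epoch_ms} ms -> {int(throughput)} outputs/s")
```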
Drone navigation typically uses companion computers for vision processing; these communicate actuation commands to the flight controller, which in turn drives the motors. Autopilot software-hardware stacks such as PX4 use UART communication for receiving the actuation commands, with maximum communication rates in the kHz range. Therefore, our throughput of 264 outputs per second is not redundant from the electronics perspective, and further improvement is also desirable. From the mechanical perspective, customized mid-sized drones capable of carrying the weight of the DVS, frame camera, and compute platform are shown in Zhu et al. (2018) and Falanga et al. (2020). These drones are demonstrated to move at ∼2 m/s, which corresponds to an SNN output for every sub-centimeter displacement and would be sufficient for tracking problems. High-speed drones are typically lightweight and unable to support the large weight of the camera and compute assembly. A closed-loop study of altering the sensor and compute weight on customized drones could identify the optimal trade-off between maximum drone speed and sensor/compute weight; this can be taken up in the future.
Comparison with prior work
We compare our method with previous demonstrations of high-speed target localization (Table 1). YOLOv3 works with a frame camera and performs reasonably fast (Redmon and Farhadi, 2018) but runs on a power-intensive GPU. ViBe (Van Droogenbroeck and Barnich, 2014) works with the difference between consecutive frames to identify motion but is ultimately limited by the frame rate of the camera. The approaches using event cameras typically perform non-selective identification and tracking, meaning that all moving objects are identified without being distinguished from one another. Falanga et al. (2020) use optical flow and event time-stamp information to segregate the moving object. Other non-selective tracking approaches (Mitrokhin et al., 2018; Vasco et al., 2017; Zhou et al., 2021) use an energy-minimizing optimization to find the 3D movement of event clusters and classify outliers among them as moving objects. These non-selective methods are incomplete without an added object-distinguishing network. Additionally, the latency of these optimizations is speculated to be higher than that of our SNN (Mitrokhin et al., 2018) because of their more complex iterations. Convolutional neural networks have also been used with modified objective functions to segment the scene into multiple objects (Stoffregen et al., 2019; Alonso and Murillo, 2019), but the setup becomes computationally expensive because of the convolutional backbone, and speed may be compromised on an edge platform. A fused optical and event-based localization capability is used in Yang (2019) but requires a Tianjic neuromorphic ASIC. Our method provides high throughput using SNNs and accurate, selective detection of prey drones using the CNN. Thus, our method can provide a high-speed implementation on an edge platform suited for UAV applications.
Bio-inspired ego-motion cancelation
A key contribution of this work lies in the design of the ego-motion filter using an SNN inspired by recent neuro-biological advances. The nullification of self-generated action (reafference) finds ample examples in biology. Male crickets cancel their own chirp, preventing them from responding to it (Kim et al., 2015). Electric fish cancel the electric field generated by their own actions (Kim et al., 2015). In primates, inputs from the vestibular system are processed in the cerebellum to keep track of self-motion (Cullen et al., 2011). Recent progress in neuroscience postulated the presence of differentially weighted neural connections behind this phenomenon (Zhang and Bodznick, 2008). The first neurophysiological evidence for this was found in a distinct class of neurons in the vestibular nucleus of the primate brainstem (Oman and Cullen, 2014). Another model argued that when the estimated response of an ego-action is close to the perceived action, the cancelation happens through adaptive inhibitory circuitry (Benazet et al., 2016). A similar observation was made earlier for humans, where "smooth pursuit eye movement" toward a target moving in one direction decreases the sensitivity of vision for the opposite direction (Lindner et al., 2001). Behavioral experiments argue that locomotive insects send a copy of the reafference perceived by the sense to an internal neural circuit for cancelation. A key experimental study of ego-motion cancelation in vision was recently published for Drosophila, probing the neurons corresponding to optical flow around the yaw and pitch axes (Kim et al., 2015). It shows that the visual neurons receive motor-related inputs during in-flight turns, causing the visual inputs to be strongly suppressed. This is very similar to the method we propose, where the visual response is canceled using the vestibular ego-motion estimate through inhibitory synapses (differential cancelation). We showed that this neuro-inspired network is capable of detecting the prey with high confidence when it is close to the predator, enabling a high-speed response.
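A minimal sketch of this differential (inhibitory) cancelation is given below; how the ego-motion prediction is derived from the IMU is abstracted away, and the gain and the synthetic patch are illustrative.

```python
import numpy as np

def cancel_ego_motion(event_counts, predicted_counts, gain=1.0):
    """Differential (inhibitory) cancelation: activity explained by the
    predator's own motion is subtracted from the observed event activity,
    leaving mostly the independently moving prey."""
    residual = event_counts - gain * predicted_counts
    return np.clip(residual, 0, None)        # neurons cannot go below rest

# Hypothetical 8x8 patch: uniform ego-motion background plus a prey blob.
ego = np.full((8, 8), 4.0)                   # counts predicted from IMU velocity
scene = ego + 0.5 * np.random.default_rng(2).standard_normal((8, 8))
scene[3, 5] += 10                            # the prey adds extra events here
print(np.round(cancel_ego_motion(scene, ego), 1))
```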
Neuro-mimetic multi-pathway processing

Our system is inspired by the multi-pathway model of visual processing proposed and found in many animals, in which multiple neural paths specialize in specific tasks and combine their inferences. In larval zebrafish, wavelength-insensitive neurons are observed to serve regular vision while UV-sensitive neurons serve prey tracking and foreground cancelation. The color-sensitive pathway in the brain has been stated to be slower than the grayscale pathway but richer in the spatial details of the information (Gegenfurtner and Hawken, 1996). Monkeys have visual pathways optimized for globally slow and locally fast signals for high-speed tracking (Mazade et al., 2019), similar to our work. Houseflies also process local and global motion data separately (Gollisch and Meister, 2010). Humans have rods and cones in the retina, separating color vision from grayscale activity at the beginning of the processing pipeline. The motion- and color-sensitive pathways were also suggested to be different in the housefly (Yamaguchi et al., 2008). This matches our design, where spatially detailed color information (frame pipeline) and temporally fine event information (event pipeline) are gathered and processed in separate pathways before merging in the fusion algorithm. Another feature of our work is that the SNN and CNN are suited to different phases of the chase (cases 1-3). This has a parallel in zebrafish, where different neuronal clusters are observed to be active in different stages of hunting (Förster et al., 2020). When the predator is at a distance and following the prey, a set of neurons suited for small object detection and tracking is active; however, as the prey is approached and becomes bigger, a different set of neurons takes over the detection task. Therefore, merging and cooperation between neural paths may yield even more interesting insights and applications in the future.
Usage of hard-coded networks
Our SNN uses a rigid synaptic weight structure to process the asynchronous incoming event stream for canceling the ego-motion. A natural criticism of this is the lack of a training methodology to allow learning. However, many instinctive tasks observed in insects are postulated to be shaped by evolution rather than by a learning response (Kanzaki, 1996). Furthermore, plasticity is high in the initial phase of life and then converges to learnt behaviors once neural development is near completion (Arcos-Burgos et al., 2019). The argument that most animal behavior is encoded in the genome instead of being learned (Zador, 2019) also supports this approach. Hard-coded SNNs have been used with event cameras for numerous tasks such as stereo depth estimation (Osswald et al., 2017), optical flow computation (Orchard et al., 2013), lane-keeping (Bing et al., 2018), and looming object avoidance (Salt et al., 2017). We believe that the accuracy of our network can be improved with SNNs trained for drone detection. This provides a first-order demonstration of shallow and fast ego-motion cancelation as a step toward building bio-inspired SNN robots for high-speed applications.
Other related works
The simultaneous use of event and frame cameras has also been explored in Liu et al. (2016) for a predation task in wheeled robots. That approach uses an event camera to identify the region of interest while a CNN performs object recognition on the identified region, saving energy and boosting processing speed. However, the CNN latency for single-frame processing persists, and region-of-interest identification becomes challenging with the cluttered backgrounds that we utilize in our work, limiting the performance of that system. Another hybrid approach fuses an SNN and a CNN for optical flow calculation (Lee et al., 2021): the events are accumulated using the SNN and merged into a CNN for more accurate optical flow. However, the CNN backbone remains critical for every inference, and the throughput is eventually limited by the compute. Our approach has independent frame- and event-based pipelines, similar to Lele and Raychowdhury (2022), that only provide their respective outputs to the fusion algorithm, which works in linear time.
The event camera-based moving object tracking problem has also been addressed using model-based approaches such as cluster detection (Delbruck and Lang, 2013), corner detection (Vasco et al., 2016), ICP (Ni et al., 2012), and region-of-interest tracking (Mohan et al., 2022). However, these works operate with either a stationary camera or a stationary environment, as opposed to the independently moving prey and predator in our case. A modification of the region proposal algorithm to identify independently moving objects from velocity estimation could be incorporated to allow tracking from a moving predator platform. Combining these approaches with hybrid processing may open up interesting future directions.
Potential limitations
It is worthwhile to speculate on the limitations of the proposed system. The performance assumes that both pipelines work reliably for their interdependent cooperation. Therefore, reasonable lighting conditions are required for the CNN pipeline, although event cameras are known to work in low-light environments. Stability of the drone under windy conditions, where drift creates spurious activity, requires accurate IMU sensors for ego-motion cancelation. Vibrations of the drone frame can also corrupt the event stream and IMU data; therefore, a stable flight is desirable for the accurate functioning of the SNN filter. High-altitude flight is expected to be easier, with sparser occlusions. We observe that rapid motion of the prey drone causes image blur in the frame-based camera, corrupting the CNN output; therefore, a high-quality image acquisition or image stabilization mechanism may be needed for ultra-rapid response implementations. The histogram-based method utilized in the SNN filter may be limited if directly applied to the simultaneous tracking of multiple objects. Recent works have demonstrated region proposal on low-cost event-accumulated binary images followed by multi-object tracking, even in the presence of occlusion, with low computation and memory costs (Acharya et al., 2019; Mohan et al., 2022). Customized circuits for this application (Bose and Basu, 2022) demonstrate high throughput and energy efficiency. Such methods can be applied for multi-object tracking in place of layer 4 after canceling the activity caused by self-motion. Finally, selective tracking of one object among multiple moving targets can be addressed in the future by altering the spatio-temporal filtering algorithm to handle positions from multiple SNN and CNN outputs.
Hardware implementation
Numerous interesting possibilities exist for circuit implementations of such hybrid systems. We evaluated the hybrid processing method on an FPGA; however, the latency of memory access and the clocked, sequential nature of the FPGA limit the performance of the SNN. Dedicated asynchronous SNN hardware such as Loihi and TrueNorth (Akopyan et al., 2015; Davies et al., 2018) would overcome this bottleneck, allowing massive parallelism at very low power. However, these general-purpose SNN ASICs have a large hardware overhead for the relatively simple network that we propose. Processing the entire algorithm on a single die with optimized circuits would allow the exploitation of a truly hybrid framework from sensing to actuation within the constrained power budget. Non-volatile crossbar arrays such as resistive RAM also show high-throughput, low-power CNN processing capability (Chang et al., 2022) that can be augmented with on-chip SNNs. Additional exploration in this direction needs to be taken up in the future.
Conclusion
We proposed a visual target localization system that leverages the fusion of frame- and event-based cameras with corresponding processing neural networks to attain accuracy and latency advantages simultaneously. The ego-motion-canceling SNN and the object-detecting CNN exploit the temporal and spatial resolution of their respective sensors in two independent pipelines. The SNN filter incorporates connectivity inspired by insect brains, and the multi-pipeline processing and interplay between the SNN and CNN have a neurobiological basis in primate and insect brains. The system is shown to work in a virtual environment and in real-world demonstrations. The feasibility of implementation on a low-resource FPGA shows a potential throughput of 264 FPS.
This work may open exciting possibilities in building hybrid SNN systems to mitigate the fundamental issues in frame-based processing.
Data availability statement
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
A chromosome-level reference genome for the giant pink sea star, Pisaster brevispinus, a species severely impacted by wasting
Abstract Efforts to protect the ecologically and economically significant California Current Ecosystem from global change will greatly benefit from data about patterns of local adaptation and population connectivity. To facilitate that work, we present a reference-quality genome for the giant pink sea star, Pisaster brevispinus, a species of ecological importance along the Pacific west coast of North America that has been heavily impacted by environmental change and disease. We used Pacific Biosciences HiFi long sequencing reads and Dovetail Omni-C proximity reads to generate a highly contiguous genome assembly of 550 Mb in length. The assembly contains 127 scaffolds with a contig N50 of 4.6 Mb and a scaffold N50 of 21.4 Mb; the BUSCO completeness score is 98.70%. The P. brevispinus genome assembly is comparable to the genome of the congener species P. ochraceus in size and completeness. Both Pisaster assemblies are consistent with previously published karyotyping results showing sea star genomes are organized into 22 autosomes. The reference genome for P. brevispinus is an important first step toward the goal of producing a comprehensive, population genomics view of ecological and evolutionary processes along the California coast. This resource will help scientists, managers, and policy makers in their task of understanding and protecting critical coastal regions from the impacts of global change.
Introduction
The California Current Ecosystem (CCE) is a dynamic and complex region of high ecological and economic value (Weber et al. 2021). A key component of protecting the value of the CCE from the negative impacts of global change is a comprehensive understanding of the connections and interactions of the species that exist here. Decades of population biology and ecology research have been conducted in the Pacific Northwest generally (Menge et al. 2019), and in California specifically (Connell 1972;Sagarin et al. 1999;Blanchette et al. 2008;Sanford et al. 2019). In recent years, studies have revealed populations of up to 2 dozen species negatively impacted, in some cases being locally extirpated, by environmental stressors including increasing temperature change, harmful algal blooms, and disease outbreaks (Jurgens et al. 2015;Harvell and Lamb 2020). Species loss and the subsequent breakdown of important interactions are detrimental to CCE function and could ultimately include region-wide ecosystem collapse (Burt et al. 2018;McPherson et al. 2021).
Addressing the intensifying threats to coastal ecosystems requires collaborative, interdisciplinary efforts by all stakeholders. A critical part of this process includes increasing genomic resources, which can be used to address ecological questions and inform conservation decisions, uniting scientists, managers, and policy makers, and complementing decades of foundational field work. Analyzing genomic data for coastal species will reveal how interspecific variation in sequence (e.g. nucleotide polymorphism) and structure (e.g. chromosomal inversions) relate to variation in susceptibility to environmental stress, furthering the goal of preserving natural resources in California and beyond (Formenti et al. 2022;Shaffer et al. 2022).
Sea stars (Echinodermata, Asteroidea) are among the taxa most severely impacted by ongoing environmental change (Montecino-Latorre et al. 2016). Sea stars are significant members of intertidal and subtidal communities with some species acting as keystone species, a concept inspired by the role of the ochre sea star, Pisaster ochraceus, in the Northeast Pacific (Paine 1966;Schultz et al. 2016). Of the 20 or more species impacted by the geographically and phylogenetically broad sea star wasting outbreak in 2013, Pisaster brevispinus, a congener of P. ochraceus, was one of the most severely impacted, with widespread wasting and precipitous population declines (Montecino-Latorre et al. 2016). Recent research has shown that losing sea stars from coastal ecosystems has cascading detrimental effects (Burt et al. 2018), further supporting their importance to nearshore communities, and motivating efforts to conserve the biodiversity that remains.
Here, we present the reference genome assembly for the giant pink sea star, P. brevispinus (Forcipulatida, Asteriidae) (Stimpson), a large-bodied, fast-moving sea star with 5 rays (i.e. arms) found in the low intertidal zone, but more commonly in the neritic zone on soft substrates from circa Ensenada, Baja California, Mexico, to Sitka, Alaska, United States ( Fig. 1, Morris et al. 1980;Costello et al. 2013;Beas-Luna et al. 2020). They are gonochoristic broadcast spawners (Morris et al. 1980) with an estimated larval duration of 76 to 266 d (Strathmann 1987). P. brevispinus is an exceptional predator: it can extend tube feet on the oral disc into the sediment as far as the length of the sea star's radius (up to ~16 cm), pulling clams and other prey to the surface for consumption (Morris et al. 1980). The reference genome produced here will contribute to our understanding of ecological and evolutionary patterns through comparisons with other sea stars and has the potential to reveal hotspots of genetic diversity, connectivity, and species associations that shape population dynamics and ecosystems along the California coast (Shaffer et al. 2022).
Biological materials
An adult P. brevispinus, 118 mm radius (arm tip to disc center), was collected from a sandstone platform at 11 to 13 m depth at Terrace Point, Santa Cruz, CA, United States (36.94487, −122.06429) on 13 October 2020 by Shannon Myers. The voucher specimen (M0D059179O) is archived in the Dawson Lab at the University of California, Merced, United States.
Nucleic acid extraction, library preparation, and sequencing
We extracted high molecular weight (HMW) genomic DNA (gDNA) from 28 mg of tube foot tissue using Nanobind Tissue Big DNA kit (Pacific BioSciences-PacBio) following the manufacturer's instructions with the following minor modification: we centrifuged tissue homogenate at 18,000 × g (instead of recommended 1,500 × g) during the second wash because faster speeds were required to remove the excess wash buffer retained in the tube foot tissue during homogenization. We assessed DNA purity using absorbance ratios (260/280 = 1.87 and 260/230 = 2.47) on a NanoDrop ND-1000 spectrophotometer. We quantified DNA yield (210 ng/µl; 20 µg total) using the Quantus Fluorometer QuantiFluor ONE dsDNA Dye assay.
We constructed the PacBio HiFi library using the SMRTbell Express Template Prep Kit v2.0 according to the manufacturer's instructions. We sheared 13.1 µg of HMW gDNA to an average size distribution of ~18 kb using Diagenode's Megaruptor 3 system. We quantified the sheared DNA using the Quantus Fluorometer and checked the size distribution using the Agilent Femto Pulse. We concentrated the sheared gDNA using 0.45× of AMPure PB beads followed by quantification using a Quantus Fluorometer. We used 6 µg of sheared, concentrated DNA as input for the removal of single-strand overhangs at 37 °C for 15 min, followed by further enzymatic steps of DNA damage repair at 37 °C for 30 min, end repair and A-tailing at 20 °C for 10 min and 65 °C for 30 min, ligation of overhang adapter v3 at 20 °C for 1 h and 65 °C for 10 min to inactivate the ligase, and nuclease treatment of SMRTbell library at 37 °C for 1 h to remove damaged or nonintact SMRTbell templates. We purified and concentrated the SMRTbell library with 0.8× AMPure PB beads for size selection using the BluePippin system. We purified the input of 3.2 µg purified SMRTbell library to load into the Blue Pippin 0.75% Agarose Cassette using cassette definition 0.75% DF Marer S1 3 to 10 kb Improved Recovery for the run protocol. We collected fragments greater than 7 kb from the cassette elution well and purified and concentrated the size-selected SMRTbell library with 0.8× AMPure beads.
We performed proximity ligation using the Dovetail Omni-C Kit according to the manufacturer's protocol with slight modifications. First, we thoroughly ground the specimen tissue with a mortar and pestle in liquid nitrogen. Subsequently, chromatin was fixed in place in the nucleus. We passed the suspended chromatin solution through 100 and 40 µm cell strainers to remove large debris. We digested fixed chromatin under various conditions of DNase I until a suitable fragment length distribution of DNA molecules was obtained. We repaired and ligated the chromatin ends to a biotinylated bridge adapter followed by proximity ligation of adapter containing ends. After proximity ligation, crosslinks were reversed, and the DNA purified from proteins. We treated the purified DNA to remove biotin that was not internal to ligated fragments. We generated a library with an Illumina compatible y-adaptor using the NEB Ultra II DNA Library Prep kit and captured biotin-containing fragments using streptavidin beads. We split the postcapture product into 2 replicates prior to PCR enrichment to preserve library complexity with each replicate receiving unique dual indices. The 20.5 kb average HiFi SMRTbell library was sequenced using one 8M SMRT Cell, Sequel II sequencing chemistry 2.0, and 30-h movies on a PacBio Sequel II sequencer. The Omni-C library was sequenced on an Illumina NovaSeq platform to generate approximately 100 million 2 × 150 bp read pairs per gigabase of genome size.
Nuclear and mitochondrial genome assemblies
We assembled the genome of the giant pink sea star following the California Conservation Genomics Project (CCGP) assembly protocol Version 4.0, introduced in (Lin et al. 2022). The difference between versions relies on the output sequences from HiFiasm [Version 0.16.1-r375] (Cheng et al. 2021) that are used to generate the final assembly (see Table 1 for assembly pipeline and relevant software). The final output corresponds to a dual or partially phased diploid assembly (http://lh3.github.io/2021/10/10/introducing-dual-assembly).
We initially removed remnant adapter sequences from the PacBio HiFi dataset using HiFiAdapterFilt [Version 1.0] (Sim et al. 2022) and generated the initial diploid assembly with the filtered PacBio and the Omni-C data using HiFiasm. We tagged output haplotype 1 as the primary assembly, and output haplotype 2 as the alternate assembly. Next, we identified sequences corresponding to haplotypic duplications on the primary assembly with purge_dups [Version 1.2.6] (Guan et al. 2020) and transferred them to the alternate assembly. We scaffolded both assemblies using the Omni-C data with SALSA [Version 2.2] (Ghurye et al. 2017(Ghurye et al. , 2019. Both assemblies were manually curated by generating and analyzing Omni-C contact maps and breaking the assemblies when major misassemblies were found. No further joins were made after this step. To generate the contact maps, we aligned the Omni-C data against the corresponding reference with bwa mem [Version 0.7.17-r1188, options -5SP] (Li 2013), identified ligation junctions, and generated Omni-C pairs using pairtools [Version 0.3.0] (Goloborodko et al. 2018). We generated a multiresolution Omni-C matrix with cooler [Version 0.8.10] (Abdennur and Mirny 2020) and balanced it with hicExplorer [Version 3.6] (Ramírez et al. 2018). We used HiGlass [Version 2.1.11] (Kerpedjiev et al. 2018) and the PretextSuite (https://github.com/wtsi-hpag/PretextView; https://github.com/wtsi-hpag/PretextMap; https://github. com/wtsi-hpag/PretextSnapshot) to visualize the contact maps.
Using the PacBio HiFi reads and YAGCloser [commit 20e2769] (https://github.com/merlyescalona/yagcloser), we closed some of the remaining gaps generated during scaffolding. We then checked for contamination using the BlobToolKit Framework [Version 2.3.3] (Challis et al. 2020). Finally, we trimmed remnants of sequence adaptors and mitochondrial contamination.
We assembled the mitochondrial genome of P. brevispinus from the PacBio HiFi reads using the reference-guided pipeline MitoHiFi (https://github.com/marcelauliano/MitoHiFi) (Allio et al. 2020). The mitochondrial sequence of P. ochraceus (NC_042741.1) was used as the starting reference sequence. After completion of the nuclear genome, we searched for matches of the resulting mitochondrial assembly sequence in the nuclear genome assembly using BLAST+ [Version 2.10] (Camacho et al. 2009) and filtered out contigs and scaffolds from the nuclear genome with a percentage of sequence identity >99% and size smaller than the mitochondrial assembly sequence.
Nuclear genome size estimation and quality assessment
We generated k-mer counts (k = 21) from the PacBio HiFi reads using meryl [Version 1] (https://github.com/marbl/ meryl). The generated k-mer database was then used in GenomeScope 2.0 [Version 2.0] (Ranallo-Benavidez et al. 2020) to estimate genome features including genome size, heterozygosity, and repeat content. To obtain general contiguity metrics, we ran QUAST [Version 5.0.2] (Gurevich et al. 2013). To evaluate genome quality and completeness we used BUSCO [Version 5.0.0] (Simão et al. 2015;Seppey et al. 2019) with the metazoan ortholog database (metazoa_ odb10) which contains 954 genes. Assessment of base level accuracy (QV) and k-mer completeness was performed using the previously generated meryl database and merqury (Rhie et al. 2020). We further estimated genome assembly accuracy via BUSCO gene set frameshift analysis using the pipeline described in Korlach et al. (2017).
We performed a k-means clustering on the lengths of the top 50 P. brevispinus scaffolds in R (R Core Team 2022) to test if a drop off in scaffold size corresponded to the number of chromosomes predicted for sea stars (Saotome and Komatsu 2002). The expectation for this test is that longer scaffolds, which represent putative chromosomes, will cluster in a group while shorter scaffolds that were not placed into chromosomes will cluster in a second group based on a measurable change in size between the last putative chromosome scaffold and the first nonchromosome scaffold. The number of long scaffolds in the first cluster therefore gives an estimate of chromosome number in P. brevispinus.
Comparison to P. ochraceus genome assembly
We compared the P. brevispinus genome assembly produced here to the chromosome-level genome sequence previously published for its congener P. ochraceus (Ruiz-Ramos et al. 2020). We generated completeness metrics for the P. ochraceus assembly (ASM1099431v1, GCA_010994315.1) in BUSCO [Version 5.0.0] using the metazoan ortholog database. To determine how the P. brevispinus scaffolds correspond to the 22 chromosomes identified in the P. ochraceus genome, we aligned the P. brevispinus genome assembly to the P. ochraceus chromosomes using the program NUCMER in the MUMmer package [Version 4.0.0] (Marçais et al. 2018) and visualized the alignments using the program Dot (github. com/marianattestad/dot).
Nucleic acid extraction, library prep, and sequencing
We estimated the integrity of the HMW DNA using the Femto Pulse system and found 96.6% of the DNA fragments were at least 125 kb. The sequencing runs generated 1.1 million PacBio HiFi reads, which yielded ~37-fold coverage (N50 read length 16,677 bp; minimum read length 61 bp; mean read length 16,615 bp; maximum read length of 51,509 bp) based on the GenomeScope 2.0 genome size estimation of 497.5 Mb. Based on the PacBio HiFi reads, we estimated a 0.00238% sequencing error rate and 1.2% nucleotide heterozygosity rate. The k-mer spectrum output based on the PacBio HiFi reads shows a bimodal distribution with 2 major peaks, at ~19-and ~38-fold coverage, where peaks correspond to homozygous and heterozygous states, respectively, of a diploid species.
Nuclear and mitochondrial genome assemblies
We generated a de novo nuclear genome assembly for P. brevispinus (eaPisBrev1) using PacBio HiFi and Omni-C reads. Complete assembly statistics are reported in Table 2 and Fig. 2B. The Omni-C contact maps suggest that both the primary assembly and alternate assemblies are highly contiguous (Fig. 2C, Supplementary Fig. S1). The assembled final mitochondrial genome size was 16,223 bp. The base composition of the final assembly version is A = 33.09%, C = 22.17%, G = 12.91%, T = 31.83%, and consists of 22 unique transfer RNAs and 13 protein coding genes.
Nuclear genome size estimation and quality assessment
Full genome statistics are available in brevispinus scaffolds into a group and the remaining scaffolds into a second group (Fig. 2D).
Comparison to P. ochraceus genome assembly
The P. brevispinus genome assembly is ~104 Mb larger than P. ochraceus (505.3 vs. 401.9 Mb, respectively) and is contained in fewer scaffolds (127 vs. 1,844, respectively). The scaffold N50 values are similar (21.4 and 21.9 Mb for P. brevispinus and P. ochraceus, respectively) as was GC content (39.5% and 39.0%, respectively). The P. brevispinus Fig. 2. Visual overview of genome assembly metrics. (A) K-mer spectra output generated from PacBio HiFi data without adapters using GenomeScope 2.0. The bimodal pattern observed corresponds to a diploid genome. K-mers covered at lower coverage and lower frequency correspond to differences between haplotypes, whereas the higher coverage and frequency k-mers correspond to the similarities between haplotypes. (B) BlobToolKit Snail plot showing a graphical representation of the quality metrics presented in Table 2 for the Pisaster brevispinus primary assembly (eaPisBrev1). The plot circle represents the full size of the assembly. From the inside-out, the central plot covers length-related metrics. The red line represents the size of the longest scaffold; all other scaffolds are arranged in size-order moving clockwise around the plot and drawn in gray starting from the outside of the central plot. Dark and light orange arcs show the scaffold N50 and scaffold N90 values. The central light gray spiral shows the cumulative scaffold count with a white line at each order of magnitude. White regions in this area reflect the proportion of Ns in the assembly. The dark versus light blue area around it shows mean, maximum and minimum GC versus AT content at 0.1% intervals. (C) Omni-C contact maps for the primary genome assembly generated with PretextSnapshot. Omni-C contact maps translate proximity of genomic regions in 3D space to contiguous linear organization. Each cell in the contact map corresponds to sequencing data supporting the linkage (or join) between 2 such regions. Scaffolds are separated by black lines and higher density corresponds to higher levels of fragmentation. (D) Histogram of the 50 largest P. brevispinus scaffolds. Gray dashed line represents the break point for 2 clusters delimited by k-means clustering of scaffold lengths.
assembly has higher BUSCO scores for complete single copy, complete + partial single copy, fragmented, and missing genes, but the P. ochraceus genome is superior in number of duplicated genes and the proportion of the genome sequence contained in the largest scaffolds (84% vs. 99%, respectively). Whole genome alignment showed that the longest P. brevispinus scaffolds generally correspond (blue dots) to the 22 chromosomes predicted for P. ochraceus, with one exception-P. brevispinus scaffolds 6 and 21 have nonoverlapping alignments to P. ochraceus chromosome 1, indicating these scaffolds should be joined. Areas of sequence inversion (green dots) and alignment gaps (no dots) were present across the alignment (Fig. 3).
Discussion
To generate resources that will inform conservation and management decisions along the California coast and beyond (Shaffer et al. 2022), we generated a genome assembly for P. brevispinus, an ecologically important sea star species. The assembly process we used here (Table 1) aims to generate haplotype-resolved, phased genome assemblies that theoretically correspond to the maternal and paternal chromosomes (Cheng et al. 2021). Ideally, the 2 assemblies are similar in size and contiguity; however, variation between assemblies does occur, as we see here for P. brevispinus (Table 2). Both the primary and alternate versions of the P. brevispinus genome assembly are available on NCBI (Table 2); in the remainder of the discussion, we focus on the more contiguous primary assembly. Sea star genomes, according to previous karyotyping experiments surveying a range of asteroid species, are organized into 22 chromosomes (Saotome and Komatsu 2002). Recent de novo reference genome assembly of P. ochraceus, 1 of 2 possible sister taxa to P. brevispinus (Mah and Foltz 2011), likewise yielded 22 major scaffolds (Ruiz-Ramos et al. 2020). Our reference genome for P. brevispinus is therefore notable in providing a similar yet different estimate, with 23 major scaffolds. Comparison between P. brevispinus and P. ochraceus shows the source of this difference is that P. brevispinus scaffolds 6 and 21 align to chromosome 1 of P. ochraceus (Fig. 3). We conclude, therefore, that the genome sequences for P. ochraceus (Ruiz-Ramos et al. 2020) and P. brevispinus support the findings of Saotome and Komatsu (2002) that sea stars have 22 chromosomes (autosomes), although more data are needed to confirm whether this result is broadly consistent across Asteroidea, or if there is variation in chromosome number in sea stars. Whether asteroids possess a pair of heterotypic, potential sex, chromosomes (Saotome and Komatsu 2002) also remains an open question.

Fig. 3. Whole genome alignment between the predicted Pisaster ochraceus 22 chromosomes (x axis) and top 23 longest Pisaster brevispinus primary assembly scaffolds (y axis). Blue dots represent areas of sequence alignment in the same direction and green dots represent areas of inverted sequence alignment in P. brevispinus (the query) relative to the P. ochraceus sequence (the reference). Light gray lines indicate chromosome and scaffold boundaries. The total axes are scaled by sequence length contained in top 23 P. brevispinus scaffolds (437.4 Mb) and P. ochraceus chromosomes (398.1 Mb). Each scaffold-to-chromosome alignment block is scaled by the length of the P. ochraceus chromosome (x axis) and the P. brevispinus scaffold (y axis).
From a DNA sequencing perspective, ideally, the entirety of a genome assembly should be contained within the number of scaffolds equal to the number of actual chromosomes. Moreover, the assembly should represent the complete genome without gaps or artificial duplication. Comparison of assemblies for the 2 congeneric sea stars P. ochraceus (Ruiz-Ramos et al. 2020) and P. brevispinus (Table 3) provides insight into genome quality beyond that provided by single-genome descriptive statistics, into challenges that remain, and into solutions being offered by recent technological advances. For example, although a higher percentage of genome sequence is contained within the 22 putative sea star chromosomes of P. ochraceus (Ruiz-Ramos et al. 2020) than for P. brevispinus, the P. ochraceus chromosome sequences include between ~10% and 20% Ns (Ruiz-Ramos et al. 2020). The 2 assemblies also differ in a variety of other aspects including contiguity and completeness (Table 3). Given the congeneric relationship of the taxa, and that the Pisaster genomes were generated under similar strategies (i.e. genomic contigs scaffolded with proximity data), differences in the sequencing technologies and assembly algorithms likely explain much of the variation in the assembly statistics. The P. ochraceus genome was assembled from Illumina short reads (2 × 150 bp) and scaffolded with Hi-C proximity reads (Ruiz-Ramos et al. 2020) while the P. brevispinus genome was sequenced with PacBio HiFi long reads (mean ~16 kb and max ~51 kb in this study) and scaffolded with Omni-C proximity reads. These differences manifest as lower numbers of fragmented and missing genes, and higher contiguity in P. brevispinus compared with P. ochraceus because HiFi reads better facilitate assembly through repetitive and low complexity regions relative to short reads. Contiguity in the P. brevispinus assembly is also likely increased by the move from scaffolding with Hi-C (which is restriction enzyme specific) to Omni-C (which is restriction enzyme agnostic), which improves resolution of topological interactions in looping and low restriction enzyme regions of the genome. The P. brevispinus assembly has higher gene duplication levels than P. ochraceus. Older PacBio chemistries had elevated error rates that could lead to artificially increased duplication (Guan et al. 2020), but P. brevispinus was generated with HiFi reads, which have accuracy rates similar to those of Illumina short reads (>99.9%, https://dovetailgenomics.com/wp-content/uploads/2019/08/Omni-C_TechNote.pdf), which is expected to reduce assembly artifacts. Ultimately, improvements in comparative genomics will require advances to both sequence generation methods (e.g. increasingly long reads) and assembly algorithms (e.g. assembly through repetitive regions and reduced assembly artifacts). Increased taxonomic coverage is also vital for placing the variation we see (e.g. genome size, duplication levels) into a phylogenetic perspective and testing whether these differences represent evolutionary changes between species or technical variation in methods.
The P. brevispinus genome generated here is a powerful tool for investigating a range of basic and applied questions central to the CCE. For example, comparative genomics of coastal invertebrates has the potential to further our understanding of local adaptation, connectivity, differentiation, and how these will influence responses to global change. Forthcoming research focused on multiple sea star species will help determine whether areas of structural variation (e.g. sequence inversions, indels, etc.) observed between P. brevispinus and P. ochraceus (Fig. 3) represent assembly artifacts or evolved differences between species, the latter of which have been shown to lead to reproductive isolation and speciation in other marine invertebrate groups (Satou et al. 2021). Comparison of gene family duplication and loss can explain the evolution of complex traits (Davidson et al. 2020;Kenny et al. 2020) and offers a useful strategy for testing genomic drivers of morphological and life history variation across sea stars.
Given the recent rise of mass mortalities, including in P. brevispinus and many other sea stars, increasing the number of genome-enabled species will improve the comparative power to test the genetic contribution of a species' susceptibility or tolerance to environmental change and/or disease. For example, previous studies have identified genetic loci responding to selective pressure from sea star wasting in P. ochraceus (Schiebelhut et al. 2018;Ruiz-Ramos et al. 2020) and expression differences in loci associated with immune and nervous system function in Pycnopodia helianthoides (Fuess et al. 2015). The availability of reference-quality assemblies will allow us to map such loci to the genome, assign them functional annotations, and compare their sequence and structure, thus permitting important multispecies comparisons and possibly a new perspective on the health of the CCE.
Multispecies comparisons also will enrich conservation efforts. Genomic data make it possible to better identify biodiversity hotspots and evolutionarily significant units that might require special management (Supple and Shapiro 2018) and can inform captive breeding programs (Hodin et al. 2021) and efforts to reduce inbreeding depression in depleted populations through assisted gene flow or reintroduction (Frankham 2015;Whiteley et al. 2015). In line with the goals of the CCGP, we will use the genome as a reference to understand patterns of population genomic structure and demographic change in P. brevispinus along the California coast. These data, combined with those for a range of other marine invertebrate taxa also being generated by the CCGP, will provide a comprehensive "community genomics" view of the coast and inform conservation strategies for marine habitats in the CCE.
Supplementary material
Supplementary material is available at Journal of Heredity online. Fig. S1. Omni-C contact map for the alternate genome assembly generated with PretextSnapshot. Omni-C contact maps translate proximity of genomic regions in 3D space to contiguous linear organization. Each cell in the contact map corresponds to sequencing data supporting the linkage (or join) between 2 such regions. Scaffolds are separated by black lines and higher density corresponds to higher levels of fragmentation.
Funding
This study is a contribution of the Marine Networks Consortium
Using Neural Architecture Search for Improving Software Flaw Detection in Multimodal Deep Learning Models
Software flaw detection using multimodal deep learning models has been demonstrated as a very competitive approach on benchmark problems. In this work, we demonstrate that even better performance can be achieved using neural architecture search (NAS) combined with multimodal learning models. We adapt a NAS framework aimed at investigating image classification to the problem of software flaw detection and demonstrate improved results on the Juliet Test Suite, a popular benchmarking data set for measuring performance of machine learning models in this problem domain.
Introduction
Most current approaches for software flaw detection rely on analysis of a single representation of a software program (e.g., source code or program binary compiled in a specific way for a specific hardware architecture). Recent work using multiple software representations and multimodal deep learning illustrates the benefits of leveraging both source and binary information in detecting flaws [5]. However, when using deep learning models, determining the most effective neural network architecture can be a challenge. Neural architecture search (NAS) is one way to perform an automated search across many different neural network architectures to find improved model architectures over manually-designed ones. In this work, we use a gradient-based NAS method that leverages a differentiable architecture sampler (GDAS) [2], which was identified as the best NAS method across 10 popular approaches when applied to image classification problems [3].
The remainder of this report is organized as follows. In Section 2, we provide an overview of the multimodal deep learning and NAS methods used to create flaw detection models.
In Section 3, we define the set of experiments conducted to assess performance of these models over the baseline of not using NAS. In Section 4, we present the results of these experiments on a standard benchmark data set used in flaw detection research. And, finally, in Section 5, we summarize our conclusions and provide suggestions for future work in this area.
Methods
In this section, we describe the Joint Autoencoder (JAE) multimodal deep learning model for software flaw detection [5] and the cell-based neural architecture search (NAS) approach used to determine an optimal architecture for that model.
Multimodal Deep Learning for Software Flaw Detection
The neural network architecture selected for these experiments is an early fusion multimodal learning model called Joint Autoencoder (JAE) [4]. JAE was originally developed for learning multiple tasks simultaneously based on sharing features that are common to all tasks. Figure 1(a) illustrates the architecture of the original JAE model, which contains 2 encoder/decoder components per modality and a single mixing component that combines the output from one of the encoders associated with each modality. The components that do not interact with the mixing component are referred to as private branches [4]. Note that each of the components depicted in the image (i.e., each box in the image) can contain one or more traditional neural network layers. Recently, an adaptation of the JAE model, referred to here as the JAE Classifier Model, was developed for classifying software functions as to whether or not they contain flaws/bugs [5]. Figure 1(b) illustrates the architecture of the JAE Classifier Model, where we remove the decoders and use a linear layer to concatenate the outputs from previous layers. In the JAE Classifier Model, we use one or more linear layers with LeakyReLU activation for the encoders and the mixing component. In the first linear layer, the number of input features is the combined output length of the two private branch encoders plus the number of output features from the mixing component, and the number of output features is fixed at 50. The final linear layer is a classifier layer mapping the 50 input features to the number of classes. In the flaw detection models used here, we use two classes, flawed and not flawed.
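To make this architecture concrete, the following is a minimal PyTorch sketch of a JAE-style classifier. The feature dimensions, module names, and the use of a single linear layer per component are illustrative assumptions (each component in [5] may contain one or more layers); this is not the implementation used in that work.

```python
import torch
import torch.nn as nn

class JAEClassifier(nn.Module):
    """Early-fusion JAE-style classifier sketch: private and shared encoders per
    modality (source-code and binary features), a mixing component over the shared
    encoders, a 50-unit fusion layer, and a 2-class output (flawed / not flawed)."""

    def __init__(self, src_dim: int, bin_dim: int, enc_dim: int = 128,
                 mix_dim: int = 64, n_classes: int = 2):
        super().__init__()
        self.src_private = nn.Sequential(nn.Linear(src_dim, enc_dim), nn.LeakyReLU())
        self.src_shared = nn.Sequential(nn.Linear(src_dim, enc_dim), nn.LeakyReLU())
        self.bin_private = nn.Sequential(nn.Linear(bin_dim, enc_dim), nn.LeakyReLU())
        self.bin_shared = nn.Sequential(nn.Linear(bin_dim, enc_dim), nn.LeakyReLU())
        # Mixing component combines the shared encoders' outputs.
        self.mixing = nn.Sequential(nn.Linear(2 * enc_dim, mix_dim), nn.LeakyReLU())
        # First linear layer: both private branches plus the mixing output -> 50 features.
        self.fuse = nn.Sequential(nn.Linear(2 * enc_dim + mix_dim, 50), nn.LeakyReLU())
        # Final classifier layer: 50 features -> number of classes.
        self.classifier = nn.Linear(50, n_classes)

    def forward(self, x_src: torch.Tensor, x_bin: torch.Tensor) -> torch.Tensor:
        p_src, s_src = self.src_private(x_src), self.src_shared(x_src)
        p_bin, s_bin = self.bin_private(x_bin), self.bin_shared(x_bin)
        mixed = self.mixing(torch.cat([s_src, s_bin], dim=-1))
        fused = self.fuse(torch.cat([p_src, p_bin, mixed], dim=-1))
        return self.classifier(fused)
```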
Neural Architecture Search
The JAE architectures described in the previous section were designed manually and thus may not be optimal for the learning tasks to which they are applied. To address this potential issue, we leverage a Neural Architecture Search (NAS) strategy to determine an optimal architecture for the flaw detection task. The specific form of NAS we employ here is based on cell-based search, in which a cell represents a portion of the architecture and is defined using a densely-connected directed acyclic graph (DAG) [3]. The edges of the DAG represent architecture layers and the nodes represent sums of the feature maps output from each of those layers. The search is performed over a set of operations (i.e., network layers) and the weights associated with those operations. Optimization of the cell structure and weights is performed within each iteration of the overall model training.
In this work, we define the macro skeleton, i.e., the full NAS architecture, as the JAE Classifier Model and the cell as the mixing layer with that model. Figure 2 illustrates the macro skeleton architecture (left), example DAG instances of the cell (center), and the cell operations used in our work (right). As noted in the image, the cell operations consist of single linear layers of sizes 25, 50, and 100 (i.e., the number of nodes in the layer). Details of the interpretation of the cell examples as sums of the feature maps of the operations can be found in [2].
We adapt the Automated Deep Learning (AutoDL) NAS comparison framework, which implements the NAS-BENCH-201 [3] image classification benchmark, for use with our flaw detection classification problem. As recommended in the NAS-BENCH-201 experiments on images and confirmed in preliminary experiments with the JAE Classifier Model, we use the GDAS search strategy [2] in the work presented here. GDAS is a gradient-based search method that uses a differentiable architecture sampler to optimize the cell search, and it has been demonstrated to be one of the more efficient NAS techniques that rely on more than simple random sampling for the cell search.
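The core idea of the GDAS sampling step can be illustrated with a few lines of PyTorch: one operation per edge is chosen with a hard but differentiable Gumbel-softmax draw over the architecture logits. This is a simplified sketch of the idea, not the AutoDL implementation itself.

```python
import torch
import torch.nn.functional as F

def sample_edge_ops(arch_logits: torch.Tensor, tau: float = 10.0) -> torch.Tensor:
    """Given architecture logits of shape [n_edges, n_ops], draw a hard one-hot
    choice of operation per edge with Gumbel-softmax, keeping gradients w.r.t.
    the logits via the straight-through estimator, as in GDAS-style search."""
    return F.gumbel_softmax(arch_logits, tau=tau, hard=True, dim=-1)

# Example use on one edge: `weights = sample_edge_ops(arch_logits)` gives a one-hot
# row per edge; the cell output on that edge is the weighted sum of the candidate
# operations' outputs, so only the sampled operation contributes in the forward pass.
```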
Optimization of the weights in the cell layers is performed using stochastic gradient descent (SGD) [8] and the overall macro skeleton architecture model fitting is performed using the ADAM optimizer [6], both as implemented in the AutoDL framework.
Experiments
In this section, we describe the experiments we performed to answer the following questions: • Are there differences between handcrafted JAE structure and selected structure from NAS?
• Are there improvements on flaw detection performance after implementing NAS?
Data
As we are measuring potential improvements when using NAS on the JAE Classifier Model, we use the same subset of the Juliet Test Suite data [7] from the software flaw detection experiments performed in [5]. The Juliet Test Suite [7] is a collection of test cases in the C/C++ language, providing pairs of functions with and without software flaws. The test cases are organized into collections based on the Common Weakness Enumeration (CWE) of the specific flaws exhibited in each function. Table 1 lists the test case CWE collections used in this work. This set of test cases represents a wide range of the types of flaws found in real-world software systems. We use the features extracted from these data as defined in [5].
In our experiments, we split each CWE collection into three data sets: 80% train, 10% validation, and 10% test. For cell search, we use the train and validation data sets to search for the best cell.
Methods used in Experiments
We compare flaw detection results using the JAE Classifier Model and the application of GDAS to the cell-based macro skeleton described in the previous section. The manually-designed JAE Classifier Model used a mixing component with a single linear layer consisting of 50 nodes, and we refer to this model as the JAE-Mixing-50 model. In our experiments, we also investigated the use of a larger layer of size 100, and we refer to that model here as the JAE-Mixing-100 model. The GDAS-based model is referred to here as the NAS-GDAS-JAE model.
Measurements used in Comparing Methods
For each of the Juliet Test Suite CWE collections, we performed N × 2 cross validation [1] with N = 5. We use this form of cross validation as it provides a pessimistic estimate of the generalization error; when training models for operational use, we often use more than 50% of our training data to fit the final model. We use class-averaged accuracy (the average of the accuracies of instances from each class, normalized by the size of each class) to adjust for the skew in the sizes of the flawed and not flawed instances (see Table 1 for details). This approach addresses skew by not favoring classification results from either of the classes when they are not equal in size. We compute and report the sample mean and sample standard deviation of the class-averaged accuracy results for each method on each CWE collection.
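As a concrete illustration, class-averaged accuracy can be computed as the unweighted mean of the per-class accuracies. The short function below is a straightforward sketch of that metric, not the authors' evaluation code.

```python
import numpy as np

def class_averaged_accuracy(y_true, y_pred):
    """Unweighted mean of per-class accuracies, so the majority class
    (e.g., 'not flawed') does not dominate the score."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    per_class = [np.mean(y_pred[y_true == c] == c) for c in np.unique(y_true)]
    return float(np.mean(per_class))

# Example: three 'not flawed' and one 'flawed' instance, all predicted 'not flawed'.
# class_averaged_accuracy([0, 0, 0, 1], [0, 0, 0, 0]) -> 0.5
```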
Cell Structure Optimization
As mentioned earlier, in the NAS-GDAS-JAE model, cell search is performed using SGD optimization. The specific parameters used in the AutoDL implementation of SGD are provided in Table 2.
Cell Structure Representation
The result of the cell search in the NAS-GDAS-JAE model is a DAG representing several linear layers of different sizes (based on our defined cell operations). The AutoDL framework in which we implemented NAS-GDAS-JAE represents a DAG instance using a string to define the specific cell operations and sums of feature maps. Figure 3 illustrates the string output of an example DAG. The summands in the string represent the sums of the feature maps associated with different cell operations. Each sum is defined inside the "| |" delimiters, where each cell operation and its edge source node is listed. For example, the summand "|25~0|50~1|50~2|" represents the sum of the feature maps of three cell operations (i.e., linear layers) at node 3 as depicted in the image: the green edge (size 25) from node 0, the blue edge (size 50) from node 1, and the blue edge (size 50) from node 2.
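To make the string encoding concrete, the following sketch parses a cell string of this form into per-node lists of (operation, source node) pairs. It assumes the NAS-BENCH-201 convention of joining node groups with '+', and the example string in the comment is hypothetical.

```python
def parse_cell_string(arch: str):
    """Parse a NAS-BENCH-201-style cell string into a list of nodes, where each
    node is a list of (operation, source_node) pairs whose feature maps are summed.
    Example (hypothetical): '|50~0|+|25~0|50~1|+|25~0|50~1|50~2|'."""
    nodes = []
    for group in arch.split('+'):
        edges = [e for e in group.strip('|').split('|') if e]
        nodes.append([(op, int(src)) for op, src in (e.split('~') for e in edges)])
    return nodes

# The summand '|25~0|50~1|50~2|' from the text parses to
# [('25', 0), ('50', 1), ('50', 2)], i.e. linear layers of sizes 25, 50, and 50
# taking input from nodes 0, 1, and 2, with their outputs summed at node 3.
```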
Results
In this section, we present the results of our experiments leveraging multimodal learning models and neural architecture search to address the question of software flaw detection.
Optimized Cell Structure of NAS-GDAS-JAE Models
The optimized cell structure of the NAS-GDAS-JAE models for each of the Juliet Test Suite data sets can be found in Table 3. Note that none of the final cell structures across the different data sets are the same. The differences in cell structures may be due to the fact that the cell search is a global optimization problem, but the SGD method is only guaranteed to find a local optimizer. Or this may be due to the differences between the data associated with the different flaw types. More work is needed to better understand the source for these differences. To illustrate some of the differences, we present plots of the convergence behaviors of the cell search (search) and macro skeleton architecture (eval) optimizations in Appendix A. Over 100 epochs, we see a wide range of behaviors, maximum accuracy values achieved, and search/eval differences across the various data sets. More work is needed to better understand how these convergence behaviors impact the flaw detection results in general. Table 4 shows the flaw detection results using the three models described above. The two JAE-Mixing-N models (with N = 50 and N = 100) are considered baselines for the NAS-GDAS-JAE model, as they use the manually-designed architecture described in previous results [5]. The results listed in the table are the sample means and sample standard deviations of the class-averaged accuracy per Juliet Test Suite data set. The boldfaced results indicate the best mean class-averaged accuracy for each data set (i.e., per row). Note that many of the differences between the means are not separated by more than a single sample standard deviation (across methods/columns), and thus the improvements using NAS may not be statistically significant. More work is needed to determine if these improvements generalize and are statistically significant.
Conclusions
In this work, we implemented a cell-based neural architecture search strategy to improve upon a manually-designed multimodal learning model for software flaw detection. Our results indicate that NAS leads to improved multimodal models that are specific to the software data being analyzed. These preliminary results provide a starting point for leveraging NAS for such a problem, as there are many open questions that still need to be addressed. In the work presented here, we used a cell that replaces only a small part of the JAE Classifier Model from [5]. However, larger, more complicated cells could lead to more pronounced improvements, but this would come at increased optimization and training cost as well. Determining the trade-offs between cell complexity and computational cost could be a useful research activity.
Magic wavelengths for the np-ns transitions in alkali-metal atoms
Extensive calculations of the electric-dipole matrix elements in alkali-metal atoms are conducted using the relativistic all-order method. This approach is a linearized version of the coupled-cluster method, which sums infinite sets of many-body perturbation theory terms. All allowed transitions between the lowest ns, np_1/2, np_3/2 states and a large number of excited states are considered in these calculations and their accuracy is evaluated. The resulting electric-dipole matrix elements are used for the high-precision calculation of frequency-dependent polarizabilities of the excited states of alkali-metal atoms. We find magic wavelengths in alkali-metal atoms for which the ns and np_1/2 and np_3/2 atomic levels have the same ac Stark shifts, which facilitates state-insensitive optical cooling and trapping.
I. INTRODUCTION
Recent progress in manipulation of neutral atoms in optical dipole traps offers advancement in a wide variety of applications. One such application is toward the quantum computational scheme, which realizes qubits as the internal states of trapped neutral atoms [1]. In this scheme, it is essential to precisely localize and control neutral atoms with minimum decoherence. Other applications include the next generation of atomic clocks, which may attain relative uncertainty of 10⁻¹⁸, enabling new tests of fundamental physics, more accurate measurements of fundamental constants and their time dependence, further improvement of Global Positioning System measurements, etc.
In a far-detuned optical dipole trap, the potential experienced by an atom can be either attractive or repulsive depending on the sign of the frequency-dependent Stark shift (ac Stark shift) due to the trap light. The excited states may experience an ac Stark shift with a sign opposite to that of the ground state Stark shift, affecting the fidelity of the experiments. A solution to this problem was proposed by Katori et al. [2], who suggested that the laser can be tuned to a magic wavelength λ magic, where lattice potentials of equal depth are produced for the two electronic states of the clock transition. In their experiment, they demonstrated that a λ magic exists for the ¹S₀ − ³P₀ clock transition of ⁸⁷Sr in an optical lattice. Four years later, McKeever et al. [3] demonstrated state-insensitive trapping of Cs atoms at λ magic ≈ 935 nm while still maintaining a strong coupling for the 6p 3/2 − 6s 1/2 transition. The ability to trap neutral atoms inside high-Q cavities in the strong coupling regime is of particular importance to the quantum computation and communication schemes [3].
In this paper, we evaluate the magic wavelengths in Na, K, Rb, and Cs atoms for which the ns ground state and either of the first two np j excited states experience the same optical potential for state-insensitive cooling and trapping. We accomplish this by matching the ac polarizabilities of the atomic ns and np j states. We conduct extensive calculations of the relevant electric-dipole matrix elements using the relativistic all-order method and evaluate the uncertainties of the resulting ac polarizabilities. We also study the ac Stark shifts of these atoms to determine the dependence of λ magic on their hyperfine structure.
The paper is organized as follows. In section II, we give a short description of the method used for the calculation of the ac polarizabilities and list our results for scalar and tensor ac polarizabilities. In section III, we discuss the effect of ac Stark shifts on the hyperfine structure of the alkali-metal atoms. In section IV, we discuss the magic wavelength results for each of the atoms considered in this work.
II. DYNAMIC POLARIZABILITIES
We begin with an outline of calculations of the ac Stark shift for linearly polarized light, following Refs. [4,5]. Angel and Sandars [4] discussed the methodology for the calculation of the Stark shift and its parameterization in terms of the scalar and tensor polarizabilities. Stark shifts are obtained as the energy eigenvalues of the Schrödinger equation with the interaction operator V I = −ε · d, where ε is the applied external electric field and d is the electric-dipole operator. The first-order shift associated with V I vanishes in alkali-metal atoms. Therefore, the Stark shift ∆E of level v is calculated from the second-order expression ∆E_v = Σ_k |⟨k|V I|v⟩|² / (E_v − E_k), where the sum over k includes all intermediate states allowed by electric-dipole transition selection rules, and E_k is the energy of the state k.
Using the Wigner-Eckart theorem, one finds that ∆E can be written as the sum [6] ∆E = −(1/2)[α 0 (ω) + α 2 (ω)(3m_j² − j(j + 1))/(j(2j − 1))] ε², where α 0 (ω) and α 2 (ω) are the scalar and tensor ac polarizabilities, respectively, of an atomic state v. The laser frequency ω is assumed to be several linewidths off-resonance. Here, the polarization vector of the light defines the z direction. The scalar ac polarizability α 0 (ω) of an atom can be further separated into an ionic core contribution α core (ω) and a valence contribution α v 0 (ω). The core contribution has a weak dependence on the frequency for the values of ω relevant to this work. Therefore, we use the static ionic core polarizability values calculated using the random-phase approximation (RPA) in Ref. [7]. The valence contribution α v 0 (ω) to the polarizability of a monovalent atom in a state v is given by [8] α v 0 (ω) = [2/(3(2j_v + 1))] Σ_k |⟨k||d||v⟩|² (E_k − E_v)/[(E_k − E_v)² − ω²], where ⟨k||d||v⟩ is the reduced electric-dipole (E1) matrix element. The experimental energies E i of the most important states i which contribute to this sum have been compiled for the alkali atoms in Refs. [9,10,11]. Unless stated otherwise, we use atomic units (a.u.) for all matrix elements and polarizabilities throughout this paper: the numerical values of the elementary charge, e, the reduced Planck constant, ħ = h/2π, and the electron mass, m e, are set equal to 1. The atomic unit for polarizability can be converted to SI units via α/h [Hz/(V/m)²] = 2.48832 × 10⁻⁸ α [a.u.], where the conversion coefficient is 4πε₀a₀³/h and the Planck constant h is factored out.
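The quoted conversion coefficient can be checked numerically from the defining constants; the short Python snippet below (using scipy's CODATA values) is included only as an illustration of the unit conversion and is not part of the original calculation.

```python
import math
from scipy.constants import epsilon_0, h, physical_constants

a0 = physical_constants["Bohr radius"][0]      # Bohr radius in meters
conv = 4 * math.pi * epsilon_0 * a0**3 / h     # a.u. of polarizability -> Hz/(V/m)^2
print(f"alpha/h [Hz/(V/m)^2] = {conv:.5e} * alpha [a.u.]")  # ~2.48832e-08
```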
The expression for the tensor ac polarizability α 2 (ω) is given in Ref. [12]. The ground state ac polarizabilities of alkali-metal atoms have been calculated to high precision [13]. However, no accurate systematic study of the ac polarizabilities of the excited states of alkali-metal atoms is currently available. The polarizability calculations for the excited np states are relatively complicated because in addition to p − s transitions they also involve p − d transition matrix elements. The matrix elements involving the nd states are generally more difficult to evaluate accurately, especially for the heavier alkalis.
In this work, we calculate np − n ′ d transition matrix elements using the relativistic all-order method [14,15] and use these values to accurately determine the np 1/2 and np 3/2 state ac polarizabilities. In the relativistic allorder method, all single and double (SD) excitations of the Dirac-Fock (DF) wave function are included to all orders of perturbation theory. For some matrix elements, we found it necessary to also include single, double and partial triple (SDpT) excitations into the wave functions (SDpT method). We conduct additional semi-empirical scaling of our all-order SD and SDpT values where we expect scaled values to be more accurate or for more accurate evaluation of the uncertainties. The scaling procedure has been described in Refs. [8,15,16].
We start the calculation of the np state valence polarizabilities using Eqs. (4) and (5). For the wavelength range considered in this work, the first few terms in the sums over k give the dominant contributions. Therefore, we can separate the np state valence polarizability into a main part, α main , that includes these dominant terms, and a remainder, α tail . We use a complete set of DF wave functions on a nonlinear grid generated using Bsplines [17] in all our calculations. We use 70 splines of order 11 for each value of the angular momentum. A cavity radius of 220 a.u. is chosen to accommodate all valence orbitals of α main . In our K and Rb calculations, we include all ns states up to 10s and all nd states up to 9d; 11s, 12s, and 10d are also added for Cs. Such a large number of states is needed to reduce uncertainties in the remainder α tail . We use the experimental values compiled in Ref. [18] along with their uncertainties for the first np − ns matrix elements, for example the 5p j − 5s matrix elements in Rb. We use the SD scaled values for some of the np − n ′ d and np − n ′ s matrix elements in the cases where it was essential to reduce the uncertainty of our calculations and where the scaling is expected to produce more accurate results based on the type of the dominant correlation corrections. This issue is discussed in detail in Refs. [19,20] and references therein.
In Table I, we give the contributions to the scalar and tensor polarizabilities of the Rb 5p 3/2 state at 790 nm to illustrate the details of the calculation. The absolute values of the corresponding reduced electric-dipole matrix elements, d, used in the calculations are also given. The contributions from the main term are listed separately. We also list the resonant wavelengths λ res corresponding to each transition to illustrate which transitions are close to 790 nm. As noted above, we use the experimental values for the 5p 3/2 − 5s matrix element from the Ref. [18]. We use the recommended values for the 5p 3/2 − 4d j transitions derived from the Stark shift measurements [21] in Ref. [22]. We find that the contribution of the 5p 3/2 − 5s transition is dominant since the wavelength of this transition (λ res = 780 nm) is the closest to the laser wavelength. The next dominant contribution for the scalar polarizability is from the 5p 3/2 − 5d 5/2 transition (λ res = 776 nm). While the contribution from this transition is less than one tenth of the dominant contributions, it gives the dominant contribution to the final uncertainty owing to a very large correlation correction to the 5p 3/2 − 5d 5/2 reduced electric-dipole matrix element. In fact, the lowest-order DF value for this transition is only 0.493 a.u. while our final (SD scaled) value is 1.983 a.u. We take the uncertainty in this transition to be the maximum difference of our final values and ab initio SDpT and scaled SDpT values. While the 5p 3/2 − 5d 3/2 transition has almost the same transition wavelength owing to the very small fine-structure splitting of the 5d state, the corresponding contribution is nine times smaller owing to the fact that the 5p 3/2 −5d 3/2 reduced electric-dipole matrix element is smaller than the 5p 3/2 − 5d 5/2 matrix element by a factor of three. As expected, the contributions from the core and tail terms are very small in comparison with the total polarizability values at this wavelength.
In Table II, we compare our results for the first excited np 1/2 and np 3/2 state static polarizabilities for Na, K, Rb, and Cs with the previous experimental and theoretical studies. (Notes to Table II: Ref. [27]; e, Ref. [28]; f, Ref. [29]; g, derived from the Ref. [21] D1 line Stark shift measurement and the ground state polarizability measurement from Ref. [30]; h, derived from the Ref. [31] D2 line Stark shift measurement and the ground state polarizability from Ref. [30].) The measurements of the ground state static polarizability of Na by Ekstrom et al. [24] were combined with the experimental Stark shifts from Refs. [25,32] to predict precise values for the 3p 1/2 and 3p 3/2 scalar polarizabilities [24]. The tensor polarizability of the 3p 3/2 state of Na has been measured by Windholz et al. [25]. The Stark shift measurements for K and Rb have been carried out by Miller et al. [26] for D1 lines and by Krenn et al. [28] for D2 lines. We have combined these Stark shift measurements with the recommended ground state polarizability values from Ref. [27] to obtain the np j polarizability values that we quote as experimental results. The np 3/2 tensor polarizabilities in K and Rb were measured in Ref. [28]. Accurate D1 and D2 Stark shift measurements for Cs have been reported in Refs. [21,31]. The most accurate experimental measurement of the 6s ground state polarizability from Ref. [30] has been used to derive the values of the 6p 1/2 and 6p 3/2 state polarizabilities in Cs quoted in Table II. Our results are in excellent agreement with the experimental values.
We note that we use our theoretical values for the 4p − 3d transitions in K and 5p − 4d transitions in Rb to establish the accuracy of our approach. We use more accurate recommended values for these transitions derived from the experimental Stark shifts [21] in Ref. [22] in all other calculations in this work, as described in the discussion of Table I.
III. AC STARK EFFECT FOR HYPERFINE LEVELS
In the above discussion, we neglected the hyperfine structure of the atomic levels. However, for the practical applications discussed in this work, it is essential to include the hyperfine structure, which is affected by the presence of the external electric field. In this section, we calculate the eigenvalues of the Hamiltonian H representing the combined effect of the Stark and hyperfine interactions. Then, we subtract the hyperfine splitting from the above eigenvalues to get the ac Stark shift of a hyperfine level. This value is used to calculate the ac Stark shift of the transition from a hyperfine level of the excited np state to a hyperfine level of the ground ns state.
A. Matrix elements of the Stark operator
First, we evaluate the matrix elements of the Stark operator in the hyperfine basis. The energy difference between two hyperfine levels is relatively small for the cases considered in this work, and the hyperfine levels are expected to mix even if small electric fields are applied. Therefore, the Stark operator now has non-zero off-diagonal matrix elements. The general matrix elements ∆E F,F″ are evaluated between the atomic states |IjFM⟩ in the hyperfine basis, where I is the nuclear spin and F = I + j.
The interaction operator V I given by Eq. (1) does not affect the nuclear spin I. In addition, the shifts due to V I are not large enough to cause mixing between two levels with different angular momentum j. As a result, ∆E F,F″ is diagonal in I and j. These approximations enable us to label the states in the hyperfine basis as |FM⟩, and Eq. (6) can be simplified accordingly (Eq. (7)). The resulting matrix element can be written in terms of a Stark shift operator V II, which is defined in terms of the λ operator. If the applied electric field is in the z direction, then the energy shifts are diagonal in M, and the matrix elements can be written separately for each M. We use the Wigner-Eckart theorem to carry out the angular reduction, i.e., the sum over the magnetic quantum numbers. The matrix elements can then be written in terms of the scalar and tensor polarizabilities. The first term, containing the scalar polarizability, results in equal shifts of all of the hyperfine levels and is non-zero only for the diagonal matrix elements (F = F″). The tensor part mixes states of different F through the Q operator, whose non-zero matrix elements determine this mixing. For each magnetic sublevel, there is a matrix with rows and columns labeled by F and F″. Therefore, magnetic sublevels with different values of |M| are shifted by different amounts. A detailed discussion of this matrix is given by Schmieder [5].
B. Energy eigenvalues
Since the Stark interactions considered in this work are comparable to the hyperfine interactions, we find the combined shift of a hyperfine level by diagonalizing the Hamiltonian H = V hfs + V II, where V hfs is the hyperfine interaction operator. In the hyperfine basis, V hfs is diagonal with matrix elements [33] given by the standard magnetic-dipole and electric-quadrupole expression, (A/2)z + B[(3/2)z(z + 1) − 2I(I + 1)j(j + 1)]/[4I(2I − 1)j(2j − 1)], where z = F(F + 1) − I(I + 1) − j(j + 1), and A and B are hyperfine-structure constants [34]. The matrix elements of H, which describe the combined effect of the Stark interaction V II and the hyperfine interaction V hfs, are obtained by adding the two contributions; using Eq. (12) and Eq. (15), they can be reduced to a more useful form V F,F″;M. The combined shift of a hyperfine level is evaluated by diagonalizing the matrix formed with V F,F″;M. The resulting diagonal matrix element (∆E F,F) corresponds to the shift in a hyperfine level F, resulting from two effects: the hfs interaction V hfs and the Stark effect V II. Consequently, we subtract the hyperfine splitting from these shifts to get the ac Stark shift of a level. The ac Stark shift of the transition from an excited state to the ground state, ∆E(n′l′j′F′M′ → nljFM), is determined as the difference between the ac Stark shifts of the two states. We calculate the magic wavelength where the ac Stark shift of the np − ns transition is equal to zero. The results of the calculation are presented in the next section.
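Numerically, this procedure amounts to diagonalizing a small F × F matrix for each magnetic sublevel M. The following is a minimal sketch of that step, assuming the Stark matrix elements V II have already been evaluated and that eigenvalues can be matched to their zero-field hyperfine levels by energy ordering (valid when the Stark mixing is modest); it is not the code used in this work.

```python
import numpy as np

def ac_stark_shifts(hfs_energies, stark_matrix):
    """Diagonalize H = V_hfs + V_II in the |F M> basis for one value of M.
    hfs_energies: zero-field hyperfine energies (diagonal of V_hfs).
    stark_matrix: second-order Stark matrix elements V_{F,F'';M} (same units).
    Returns the ac Stark shifts with the hyperfine splitting subtracted."""
    hfs_energies = np.asarray(hfs_energies, dtype=float)
    H = np.diag(hfs_energies) + np.asarray(stark_matrix, dtype=float)
    eigvals = np.linalg.eigvalsh(H)              # eigenvalues sorted ascending
    return eigvals - np.sort(hfs_energies)       # shift of each hyperfine level
```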
IV. MAGIC WAVELENGTHS FOR THE np − ns TRANSITIONS
We define the magic wavelength λ magic as the wavelength where the ac polarizabilities of the two states are the same, leading to zero ac Stark shift for the corresponding transition. For the np − ns transitions considered in this work, it is found at the crossing of the ac polarizability curves for the ns and np states. In the case of the np 3/2 − ns transitions, the magic wavelengths need to be determined separately for the cases with m j = ±1/2 and m j = ±3/2 owing to the presence of the tensor contribution to the total polarizability of the np 3/2 state. According to Eq. (3), the total polarizability for the np 3/2 states is determined as α = α 0 − α 2 for m j = ±1/2 and α = α 0 + α 2 for m j = ±3/2. The uncertainties in the values of magic wavelengths are found as the maximum differences between the central value and the crossings of the α ns ± δα ns and α np ± δα np curves, where the δα are the uncertainties in the corresponding ns and np polarizability values. We also study λ magic for transitions between particular np 3/2 F′M′ and ns F M hyperfine sublevels. The ac Stark shifts of the hyperfine sublevels of an atomic state are calculated using the method described in the previous section. In alkali-metal atoms, all magnetic sublevels have to be considered separately; therefore, λ magic is different for the np F′M′ − ns F M transitions. We include several examples of such calculations.

Table III: Magic wavelengths λ magic above 500 nm for the 3p 1/2 − 3s and 3p 3/2 − 3s transitions in Na and the corresponding values of polarizabilities at the magic wavelengths. The resonant wavelengths λ res for transitions contributing to the 3p j ac polarizabilities and the corresponding absolute values of the electric-dipole matrix elements are also listed. The wavelengths (in vacuum) are given in nm and electric-dipole matrix elements and polarizabilities are given in atomic units.
We calculated the λ magic values for np 1/2 − ns and np 3/2 − ns transitions for all alkali atoms from Na to Cs.
As a general rule, we do not list the magic wavelengths which are extremely close to the resonances. Below, we discuss the calculation of the magic wavelengths separately for each atom. The figures are presented only for np 3/2 states as they are of more experimental relevance. All wavelengths are given in vacuum.
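In practice, a magic wavelength is located numerically as the root of the difference between the two dynamic polarizabilities between a given pair of resonances. The sketch below illustrates this with the sum-over-states form of the scalar valence polarizability discussed in Section II (all quantities in atomic units, with placeholder input data); it is not the code used for the results reported here.

```python
import numpy as np
from scipy.optimize import brentq

def alpha0_valence(omega, lines, jv):
    """Scalar valence polarizability (a.u.) from a list of
    (reduced E1 matrix element, transition energy E_k - E_v) pairs."""
    d = np.array([d_kv for d_kv, _ in lines])
    dE = np.array([de for _, de in lines])
    return (2.0 / (3.0 * (2 * jv + 1))) * np.sum(d**2 * dE / (dE**2 - omega**2))

def magic_frequency(alpha_ns, alpha_np, w_lo, w_hi):
    """Frequency (a.u.) at which the ns and np ac polarizabilities cross,
    searched between two neighboring resonances."""
    return brentq(lambda w: alpha_np(w) - alpha_ns(w), w_lo, w_hi)
```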
A. Na
We list the magic wavelengths λ magic above 500 nm for the 3p 1/2 −3s and 3p 3/2 −3s transitions in Na and the corresponding values of polarizabilities at the magic wavelengths in Table III. For convenience of presentation, we also list the resonant wavelengths λ res for transitions contributing to the 3p 1/2 and 3p 3/2 ac polarizabilities and the corresponding values of the electric-dipole matrix elements along with their uncertainties. Only two transitions contributing to the ground state polarizabilities are above 500 nm, 3p 1/2 −3s and 3p 3/2 −3s. Therefore, there is no need to separately list resonant contributions to the ground state polarizability. To indicate the placement of the magic wavelength, we order the lists of the resonant and magic wavelengths to indicate their respective placement. The polarizabilities and their uncertainties are calculated as described in Section II. The transitions up to np − 3s and 3p − nd, n = 6 are included into the main term and the remainder is evaluated in the DF approximation. The values of the 3p − 3s matrix elements are taken from [18], the remaining matrix elements are either SD or SD scaled values. The uncertainties in the values of the Na matrix elements were estimated to be generally very small. The resonant wavelength values are obtained from energy levels from National Institute of Standards and Technology (NIST) database [9]. We assume no uncertainties in the energy values for all elements.
Since the 3s polarizability has only two resonant transitions at wavelengths greater than 500 nm, it is generally small except in close vicinity to those resonances. Since the polarizability of the 3p 1/2 state has several contributions from the resonant transitions in this range, it is generally expected that it crosses the 3s polarizability in between of the each pair of resonances listed in Table III unless the wavelength is close to 3p−3s resonances. The same is expected in the case of the 3p 3/2 polarizability for the |m j | = 1/2 cases as described by Eq. (3) (α = α 0 − α 2 ). However, when α = α 0 + α 2 (m j = ±3/2 in Eq. 3) all 3p 3/2 − ns transitions do not contribute to the total polarizability owing to the exact cancellation of the scalar and tensor contributions for v = 3p 3/2 and k = ns in Eqs. (4) and (5). In this case, the angular factor in Eq. (5) is exactly −2/3(2j v + 1) leading to exact cancellation of such terms, and the total polarizability comes from the remaining 3p − nd contributions which do not cancel out. As a result, there are no resonances for the m j = ±3/2 cases at the wavelengths corresponding to 3p 3/2 − ns transitions leading to substantial reduction in the number of magic wavelengths. We note that there is a magic wavelength at the 589.557 nm owing to the resonances in the ground state polarizability. The corresponding polarizability value is very small making this case of limited practical use. While there has to be a magic wavelength between 3p 3/2 −4d 3/2 and 3p 3/2 −4d 5/2 resonances at 568.98 nm, we are not listing it owing to a very small value of the 4d fine-structure splitting. We illustrate the magic wavelengths for the 3p 3/2 − 3s transition in Fig. 1 where we plot the values of the ac polarizabilities for the ground and 3p 3/2 states.
It is interesting to consider in more detail the region close to the 3p j − 3s resonances since in this case one magic wavelength is missing for both 3p 1/2 − 3s and 3p 3/2 − 3s α 0 − α 2 cases, one on the side of each 3p − 3s resonance as evident from Table III. We plot the ac polarizabilities for the 3s, 3p 1/2 , and 3p 3/2 m j states in this region in Fig. 2. The placements of the 3p 1/2 − 3s and 3p 3/2 − 3s resonances are shown by vertical lines. In the case of the 3p 1/2 state, the 3p 1/2 − 3s resonance contributes to both ground state and 3p 1/2 polarizabilities. As a result, both of these polarizabilities are large but have opposite sign right of the 3p 1/2 − 3s resonance at 589.76 nm leading to missing magic wavelength for 3p 1/2 −3s transition between the 3p 1/2 −5s and 3p 1/2 −3s resonances. In the 3p 3/2 − 3s α 0 − α 2 case, there is a missing magic wavelength to the left of the 3p 3/2 − 3s 589.12 nm resonance for the same reason. The values of the α 0 + α 2 for the 3p 3/2 state are very small and negative in that entire region owing to the cancellations of the 3p − 3s contributions in the scalar and tensor 3p 3/2 polarizabilities described above.
In summary, there is only one case for Na in the considered range of wavelengths where the magic wavelength exists for all sublevels (567 nm) at close values of the polarizabilities (−2000 a.u.). The ac polarizabilities for the 3s and 3p 3/2 states near this magic wavelength are plotted in Fig. 3. The plot of the ac Stark shift for the transition between the hyperfine sublevels near 567 nm is shown in Fig. 4. The λ magic is found at the point where the ac Stark shift of the transition from the 3p 3/2 F′ = 3 M′ sublevels to the 3s F M sublevels crosses zero. This crossing of the ac Stark shift curve occurs close to 567 nm, which is close to the wavelength predicted by the crossing of the polarizabilities illustrated in Fig. 3, as expected.
B. K
The magic wavelengths λ magic above 600 nm for the 4p 1/2 − 4s and 4p 3/2 − 4s transitions in K are listed in Table IV. Table IV is structured in exactly the same way as Table III. The electric-dipole matrix elements for the 4p − 4s transitions are taken from [18], and the electric-dipole matrix elements for the 4p − 3d transitions are the recommended values from Ref. [22] derived from the accurate Stark shift measurements [26]. The resonant wavelengths are obtained from the energy levels compiled in the NIST database [9]. The transitions up to 4p − 10s and 4p − 9d are included in the main term of the polarizability, and the remainder is evaluated in the DF approximation. In the case of some higher states, such as 9s, we did not evaluate the uncertainties of the matrix elements where we expect them to be small (below 0.5%). As a result, the uncertainties in the values of the magic wavelengths near these transitions do not include these contributions and may be slightly larger than estimated. In the test case of Rb, the uncertainties are evaluated for all transitions with resonant wavelengths above 600 nm and no significant differences in the uncertainties of the relevant magic wavelengths with other elements are observed.
The main difference between the Na and K calculations is the extremely large correlation correction to the values of the 4p − 4d transitions. The correlation correction nearly exactly cancels the lowest-order DF value, leading to a value that is essentially zero within the accuracy of this calculation. As a result, we do not quote the values for the magic wavelength between the 4p − 4d and 4p − 6s resonances. We note that these two resonances are very closely spaced (1.5 nm), thus probably making the use of such a magic wavelength impractical. Our present calculation places the magic wavelength for the 4p 1/2 − 4s transition in the direct vicinity (within 0.01 nm) of the 693.82 nm resonance. We note that the measurement of the ac Stark shift (or the ratio of the 4s to 4p Stark shifts) near the 4p − 4d resonance may provide an excellent benchmark test of atomic theory. This problem of the cancellation of the lowest- and higher-order terms for the np − nd transitions is unique to K. In the case of Rb, the correlation for the similar 5p − 5d transition is very large but adds coherently to the DF values. As a result, we were able to evaluate the corresponding Rb 5p − 5d matrix elements with 4.5% accuracy. The accuracy is further improved for the 6p − 6d transitions in Cs.
We also located the magic wavelengths for the 4p 3/2 − 4s transition between 4p 3/2 − 3d 3/2 and 4p 3/2 − 3d 5/2 resonances, but found that m j = ±1/2 curve crosses the 4s polarizability very close (within 0.002 nm) to the resonance. Therefore, we do not list this crossing in Table IV. We note that m j = ±3/2 curve crosses the 4s polarizability curve further away from resonance at 1177.35 nm.
The polarizability values for both of these crossings are 500 a.u.

Table VI: Magic wavelengths λ magic above 600 nm for the 5p 3/2 − 5s transition in Rb and the 6p 3/2 − 6s transition in Cs and the corresponding values of polarizabilities at the magic wavelengths. The resonant wavelengths λ res for transitions contributing to the np j ac polarizabilities and the corresponding absolute values of the electric-dipole matrix elements are also listed. The wavelengths (in vacuum) are given in nm and electric-dipole matrix elements and polarizabilities are given in atomic units.
C. Rb
We list the magic wavelengths λ magic above 600 nm for the 5p 1/2 − 5s transition in Rb and the 6p 1/2 − 6s transition in Cs in Table V. In this case, all Rb 5p 1/2 − nl j resonances have significant spacing, allowing us to determine the corresponding magic wavelengths. The magic wavelengths above 600 nm for the 5p 3/2 − 5s transition in Rb and the 6p 3/2 − 6s transition in Cs are grouped together in Table VI. The transitions up to 5p − 10s and 5p − 9d are included in the main term calculation of the Rb 5p polarizabilities and the remainder is evaluated in the DF approximation. The 5p − 5s matrix elements are taken from Ref. [18], and the 5p − 4d E1 matrix elements are the recommended values derived from the Stark shift measurements [26] in Ref. [22]. As we discussed in Section II, the correlation correction is very large for the 5p − 5d transitions; the DF value for the 5p 3/2 − 5d 5/2 transition is 0.5 a.u. while our final value is 2.0 a.u. However, nearly the entire correlation correction to this value comes from the single all-order term, which can be more accurately estimated by the scaling procedure described in Refs. [8,15,16]. To evaluate the uncertainty of these values, we also conducted another calculation including the triple excitations relevant to the correction of the dominant correlation term (SDpT method), and repeated the scaling procedure for the SDpT calculation. We took the spread of the final values and the SDpT ab initio and SDpT scaled values to be the uncertainty of the final numbers. Nevertheless, even such an elaborate calculation still gives an estimated uncertainty of 4.5%.
We illustrate the λ magic for the 5p 3/2 − 5s transition near 791 nm in Fig. 5. We note that this case is different from that of Na illustrated in Fig. 3, where both α 0 + α 2 and α 0 − α 2 curves for the 3p 3/2 polarizability cross the 3s polarizability curve at approximately the same polarizability values. In the Rb case near 791 nm, α 0 + α 2 and α 0 − α 2 curves for the 5p 3/2 polarizability cross the 5s polarizability curve at 125 a.u. and -6910 a.u., respectively. As a result, the |M ′ | = 3 curve on the ac Stark shift plot for the transition between hyperfine sub levels shown in Fig. 6 is significantly split from the curves for the other sublevels.
The magic wavelengths for the 5p 3/2 −5s transition between the fine-structure components of the 5p−nd j levels are not listed owing to very small fine structures of these levels. We note that crossings for all m j sublevels should be present between the fine-structure components of the 5p − nd j lines. We illustrate such magic wavelengths for Cs, which has substantially larger nd j fine-structure splittings.
D. Cs
Our results for Cs are listed in Tables V and VI. The values of the 6p − 6s matrix elements are taken from [35], and the values for 6p − 7s transitions are taken from the results compiled in [15] (derived from the 7s lifetime value). We derived the 6p 1/2 − 5d 3/2 value from the experimental value of the D1 line Stark shift in Cs [21] combined with the experimental ground state polarizability value from [30]. The procedure for deriving the matrix element values from the Stark shifts is described in Ref. [22]. We use the theoretical values of the ratios of the 6p 1/2 − 5d 3/2 , 6p 3/2 − 5d 3/2 , and 6p 3/2 − 5d 5/2 values from Ref. [8] to obtain the values for the 6p 3/2 −5d 3/2 and 6p 3/2 − 5d 5/2 matrix elements. We use the experimental energy levels from [10,11,36], and references therein to obtain the resonance wavelength values. The transitions up to 6p − 12s and 6p − 9d are included into the main term calculation of the polarizabilities and the remainder is evaluated in the DF approximation.
We find that there are no magic wavelengths for the 6p 1/2 − 6s transition in between the 6p 1/2 − 6s, 6p 1/2 − 6d 3/2 , and 6p 1/2 − 8s resonances whereas there are the magic wavelengths in between the corresponding resonances in Rb. The difference between the Rb and Cs cases is in the placement of the 6p 3/2 − 6s resonance in Cs and 5p 3/2 − 5s resonance in Rb. In Rb, 5p 3/2 − 5s resonance is at 780 nm and follows the 5p 1/2 − 5s one. In Cs, the 6p 3/2 − 6s resonance is at 852 nm and is located in between the 6p 1/2 − 6d 3/2 and 6p 1/2 − 8s resonances owing to much larger 6p fine-structure splitting. As a result, there are no magic wavelengths in this range. Also unlike the Rb case, the magic wavelengths around 935 nm for the 6p 3/2 − 6s transition in Cs correspond to similar values of the polarizability (about 3000 a.u.) for all sublevels as illustrated in Fig. 7. The nearest resonances to this magic wavelength are 6p 3/2 − 6d j ones; therefore the contributions from these transitions are dominant. To improve the accuracy of these values, we conducted a more accurate calculation for these transitions following the 5p − 5d Rb calculation described in the previous subsection. As a result, we expect our values of the 6p − 6d matrix elements to be more accurate than the one quoted in Ref. [8]. Nevertheless, the uncertainties in the values of the corresponding magic wavelengths are quite high because the 6s and 6p 3/2 polarizability curves cross at very small angles. As a result, even relatively small uncertainties in the values of the polarizabilities propagate into significant uncertainties in the values of the magic wavelengths. Our values for these magic wavelengths are in good agreement with previous studies [3,37]. The ac Stark shift of the 6p 3/2 F ′ = 5M ′ to 6sF M transition as a function of wavelength at the 925 − 945 nm range is plotted in Fig. 8.
V. CONCLUSION
We have calculated the ac polarizabilities of the ns ground states and np excited states in Na, K, Rb, and Cs using the relativistic all-order method and evaluated the uncertainties of these values. The static polarizability values were found to be in excellent agreement with previous experimental and theoretical results. We have used our calculations to identify the magic wavelengths at which the ac polarizabilities of the alkali-metal atoms in the ground state are equal to the ac polarizabilities in the excited np j states, facilitating state-insensitive cooling and trapping.
VI. ACKNOWLEDGMENTS
We gratefully acknowledge helpful discussions with Fam Le Kien. This work was performed under the sponsorship of the National Institute of Standards and Technology, U.S. Department of Commerce.
ICER report on Alzheimer’s disease: implications from a patient perspective
Alzheimer's disease (AD) affects nearly 6 million people in the United States and is expected to affect more than 14 million people by 2060. 1,2 The cost of caring for patients with AD was estimated to be $305 billion in 2020 and is expected to increase to more than $1 trillion as the population ages. 1 On June 7, 2021, the US Food and Drug Administration (FDA) approved Aduhelm (aducanumab) for the treatment of AD in patients with mild cognitive impairment or the mild dementia stage of disease through the accelerated approval pathway, based on the reduction of amyloid beta plaques observed in patients treated with the drug. 3,4 Aducanumab is administered as an intravenous infusion every 4 weeks, at a cost of $56,000 per year. 4 The Institute for Clinical and Economic Review (ICER) recently conducted an assessment of aducanumab for AD. 5 This commentary focuses on the findings of the assessment and policy recommendations and their implications from a patient perspective.
ICER Report and Recommendations
The ICER assessment was conducted from a payer and a societal perspective and used the 2 identical phase 3 randomized clinical trials conducted by Biogen (ENGAGE and EMERGE). The ICER review committee unanimously found the evidence to be insufficient to determine the net health benefit of aducanumab, given the uncertainty of benefits and the certainty that harms can occur in patients treated with aducanumab. 5 The annual health-benefit price benchmark as determined by ICER ranged from $3,000 to $8,400, depending on the threshold and perspective, which means that an 85%-95% discount from the listed $56,000 annual price would be required to reach the threshold price set by the assessment. 5

Several confounding factors precluded a straightforward assessment of aducanumab. Both clinical trials randomized patients with early AD to low- or high-dose aducanumab or placebo, and the primary clinical outcome was change in the mean score on the Clinical Dementia Rating Scale-Sum of Boxes (CDR-SB). Aducanumab effectively removed beta amyloid at all doses, but both trials were terminated following a prespecified interim analysis for futility. A potential positive treatment effect was noted in the EMERGE trial, although the definition of improvement on the CDR-SB has not been solidified. Results from the ENGAGE trial failed to yield any improvement on the CDR-SB in the high-dose group when compared with placebo. 5

Further confounding matters was the FDA's role in the regulatory process. As stated by ICER in its policy recommendations, the FDA should "set a clearer regulatory framework in place by specifying a threshold range for amyloid clearance that will be accepted going forward as 'reasonably likely' to provide patient benefit." 6 The FDA advisory panel that was convened voted against approval, but further deliberation by the FDA resulted in approval, not on the basis of the interpretation of the clinical outcomes data from the trials, but by pivoting and considering amyloid clearance a surrogate endpoint now deemed "reasonably likely" to lead to a benefit. This shift was controversial, since no data were ever disclosed to show the correlation of amyloid clearance with cognitive outcomes from the clinical trials, and the accelerated approval pathway was leveraged to approve a drug in a therapeutic area where clinical outcome measures do exist and can be measured in relatively short clinical trials. 6

In its assessment, ICER pooled the results of the 2 trials, which under ordinary circumstances might have raised methodological concerns, but given the existing complexities underlying the trials and the interpretation of the results, this likely did not affect the final conclusions drawn by the committee. Furthermore, ICER performed sensitivity analyses and a thorough examination of all of the evidence presented from the trials.
The cost-effectiveness of aducanumab in addition to supportive care was evaluated against supportive care alone. From a health care system perspective, the percentage of patients that could be treated before crossing the ICER potential budget impact threshold of $819 million per year is 2.5% (approximately 35,000 of the 1.4 million AD patients eligible for treatment with aducanumab). These results prompted ICER to issue an access and affordability alert for aducanumab. 5
Implications for Patients with AD
HEALTH DISPARITIES
Health disparities exist in the US health care system, and patients with AD are no exception. Black and Hispanic patients are 1.5 to 2 times more likely to develop AD, yet in the ENGAGE and EMERGE trials the patient population was remarkably homogenous: only 0.6% of patients were Black and 1.5% were Hispanic. 5 ICER faced some criticism in the public comments for using this against aducanumab, but the composition of the trial populations only serves to highlight the disparities in treatment. The American Academy of Neurology (AAN) also echoed ICER's concerns regarding the lack of racial and ethnic diversity in the clinical trial population, given the disproportionate impact of AD on Black and Hispanic patients.
It can be surmised that the lack of diversity in the trial populations is indeed problematic, since there are potential biomarker implications in AD that may vary by ethnicity. Evidence of the magnitude of this variance is still not well defined but appears to be a consideration that could carry ramifications for different populations. 7 Understanding amyloid clearance with aducanumab and its relationship to ethnicity could also be important from a clinical and safety perspective. ENGAGE and EMERGE combined had approximately 35% of patients who experienced amyloid-related imaging abnormalities (ARIA) that ranged in severity from asymptomatic to requiring discontinuation of treatment. 5 Because the trial populations were predominately White, any differences in safety attributable to ethnicity were unlikely to be detected.
APPROPRIATE USE AND EXPECTATIONS FROM ADUCANUMAB TREATMENT
The population studied in the trials was one of mild cognitive impairment or mild dementia. Initially, the FDA issued blanket approval for the drug but later narrowed the scope of the approved label. The Alzheimer's Association has shown support for the FDA's narrowing of the label. 8 ICER stated in its policy recommendations that, in order to keep patients and their families from being misled, a collaborative effort should be employed to characterize the potential benefits of aducanumab as slowing the natural progression of the disease, not as a cure or a treatment resulting in "improvement" or "return to quality of life." 6 Given the lack of treatment options for patients with AD, patients and caregivers are desperate for a viable option, and setting realistic expectations and ensuring that patients and caregivers are grounded in the limitations of treatment with aducanumab is paramount. Some providers have expressed concern regarding the approval and use of aducanumab, and several prominent medical centers have announced that they will not currently recommend aducanumab. 9-11
ACCESS
Perhaps the largest hurdle facing patients with AD who may benefit from aducanumab is the annual price tag. As the AAN has pointed out in its public comments to ICER, patients with AD are likely to incur a significant amount of financial hardship given the recent trend of rising out-of-pocket costs for many neurologic medications. 5 Traditionally, coverage determinations for Medicare are made without taking cost into consideration. For drugs covered under Part B, Medicare reimburses 106% of the average sales price (ASP). For drugs such as aducanumab, where an ASP is not available, Medicare pays 103% of the wholesale acquisition cost (WAC), which is currently $56,000 per year for aducanumab. Based on an analysis by the Kaiser Foundation, nearly 2 million Medicare beneficiaries used 1 or more AD treatments covered under Part D in 2017. If even just a quarter of these patients are prescribed aducanumab (~500,000) at the reimbursement assumption of 103% of WAC costs, then the total spending on aducanumab in 1 year alone would be $29 billion, which far exceeds spending on any other drug covered under Part B or Part D (for context, total Medicare spending for all Part B drugs was $37 billion in 2019). Based on the current assumption for most Part B covered drugs and services, for which Medicare pays 80% and beneficiaries are responsible for the remaining 20%, beneficiaries would face substantial out-of-pocket costs for the drug. 12 The Centers for Medicare & Medicaid Services announced in July that it will conduct a National Coverage Determination (NCD) analysis for monoclonal antibodies directed against amyloids for the treatment of AD. 13 The results of the NCD will ultimately determine how or if aducanumab is covered. The Alzheimer's Association has stated its support for the NCD, as well as a coverage with evidence development (CED). 14
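As a rough sanity check, the spending arithmetic quoted above can be reproduced in a few lines. This is an illustrative back-of-the-envelope sketch using the assumptions stated in the text (103% of WAC, roughly 500,000 treated beneficiaries), not an official CMS or Kaiser Foundation calculation; the coinsurance line is a purely hypothetical illustration.

```python
# Back-of-the-envelope check of the figures quoted above (illustrative only).
wac_per_year = 56_000          # listed annual price, USD
medicare_pays = 1.03           # 103% of WAC when no ASP is available
patients = 500_000             # ~a quarter of ~2 million Part D AD-treatment users

total_spend = patients * wac_per_year * medicare_pays
print(f"Total annual Medicare spend: ${total_spend/1e9:.1f} billion")  # ~ $28.8 billion (~$29 billion quoted above)

# Hypothetical per-beneficiary 20% coinsurance on the Medicare-paid amount
beneficiary_share = 0.20 * wac_per_year * medicare_pays
print(f"Illustrative beneficiary coinsurance: ${beneficiary_share:,.0f} per year")
```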
Summary
The lack of efficacy demonstrated in the clinical trial data, coupled with the regulatory irregularities from the FDA, has created a slippery slope for future approvals. The steep discount needed to reach the threshold price presents a conundrum for payers and patients with AD. The current approval of aducanumab may offer a glimmer of hope for a portion of patients with AD, but it comes with a steep price tag that patients may not be able to afford. However, it must be noted that this is the first drug targeted at AD to be approved in almost 20 years, and it may provide some learnings to be carried forward in the quest for future viable treatments for patients with AD.
DISCLOSURES
No funding was received for the writing of this commentary. The author has nothing to disclose.
The Combination of the Lactate Dehydrogenase/Hemoglobin Ratio with the PLASMIC Score Facilitates Differentiation of TTP from Septic DIC Without Identification of Schistocytes
In some cases, differentiating thrombotic thrombocytopenic purpura (TTP) from septic disseminated intravascular coagulation (DIC) without measuring ADAMTS13 activity is critical for urgent lifesaving plasma exchange. We investigated whether the PLASMIC score (calculated without identifying the presence of schistocytes), D-dimer, fibrin/fibrinogen degradation products (FDP), the FDP/D-dimer ratio, prothrombin time-international normalized ratio (PT-INR), lactate dehydrogenase (LD), hemoglobin (Hb), and the LD/Hb ratio are useful in differentiating patients with TTP from those with septic DIC. Retrospective analysis was conducted on the medical records of patients with septic DIC (32 patients) or TTP (16 patients). The PLASMIC score and the other laboratory measurements were all helpful in differentiating TTP from septic DIC. When dichotomized between high risk (scores 6–7) and intermediate–low risk (scores 0–5), the PLASMIC score predicted TTP with a sensitivity of 75.0% and a specificity of 100%. However, 4 of 16 patients with TTP and 19 of 32 patients with septic DIC showed comparable PLASMIC scores of 4 or 5, making it difficult to distinguish between the two by PLASMIC score alone. Among the measurements examined, the LD/Hb ratio was the most useful for differentiation. Receiver operating characteristic analysis of the LD/Hb ratio for predicting TTP revealed a cutoff of 53.7 (IU/10 g) (sensitivity 0.94, specificity 0.91). If the LD/Hb ratio was less than 53.7, it was unlikely that the patient had TTP. A combination of the LD/Hb ratio and the PLASMIC score may be useful for distinguishing between TTP and DIC and identifying patients who need rapid plasma exchange or caplacizumab administration.
Introduction
Thrombotic thrombocytopenic purpura (TTP) is a thrombotic microangiopathy (TMA) resulting from severely diminished activity of the von Willebrand factor (VWF)-cleaving protease ADAMTS13. It is characterized by extensive intravascular thrombosis enriched with platelets, leading to thrombocytopenia, microangiopathic hemolytic anemia, and occasionally organ dysfunction. 1 Timely initiation of plasma exchange is crucial, as TTP is nearly always fatal without intervention, 2 and a deficiency in ADAMTS13 activity serves as a definitive diagnostic criterion. 3 However, due to lengthy turnaround times and limited accessibility of the ADAMTS13 assay, initial diagnosis of TTP primarily relies on clinical manifestations. Distinguishing TTP from other forms of TMA, such as hemolytic uremic syndrome (HUS) and atypical HUS, is challenging since these conditions exhibit similar clinical symptoms. To address this issue, Bendapudi et al 4 developed the PLASMIC score, which utilizes readily available laboratory parameters to identify TTP patients without measuring ADAMTS13 activity. The PLASMIC score has been validated [5][6][7][8] and has demonstrated high sensitivity (0.9-0.98) and variable specificity (0.46-0.92), making it a practical tool for selecting TTP patients from among TMA cases in clinical practice. Nonetheless, in real-world scenarios, clinicians unfamiliar with TTP diagnosis often face difficulties in differentiating TTP from sepsis-induced disseminated intravascular coagulation (DIC). This challenge arises from the lack of specific laboratory tests for diagnosing DIC and the overlapping clinical manifestations, such as anemia, thrombocytopenia, bleeding, elevated D-dimer levels, multiple organ dysfunction, and occasionally the presence of fragmented red blood cells (schistocytes), seen in both TTP and septic DIC. Consequently, clinicians encounter delays in promptly and accurately identifying TTP patients, thus impeding the timely initiation of plasma exchange. Moreover, the PLASMIC score, employed to select TTP patients among those with TMA, defined by the presence of at least 1% schistocytes and platelet counts below 150 × 10^3/μL, 4 can pose challenges in emergency situations, particularly during nighttime, when determining the presence of at least 1% schistocytes in peripheral blood becomes arduous. Therefore, we decided to investigate the possibility of differentiating TTP from DIC using the PLASMIC score and some laboratory measurements, without measuring ADAMTS13 activity or confirming schistocytes.
Institutional Review Board approval was obtained for this study (No. 2146).
Patients
This study included consecutive patients admitted to the Department of General Medicine in the Nara Medical University Hospital between December 2012 and February 2018 who were diagnosed with TTP or septic DIC by physicians in the Department of General Medicine. The diagnosis of patients was based on the Japanese Association for Acute Medicine (JAAM) criteria for DIC 9,10 and the International Society on Thrombosis and Haemostasis (ISTH) consensus on the definition of TTP. 3 Retrospective analysis was conducted on the medical records of these patients.
ADAMTS13 Activity
The activity of ADAMTS13 and its inhibitor were measured in TTP patients using ADAMTS13-act-ELISA (Kainos, Tokyo, Japan).
The PLASMIC Score, PT-INR, D-Dimer, FDP, FDP/D-Dimer Ratio, LD, and LD/Hb Ratio
The PLASMIC score was calculated using admission data. The study aimed to determine the usefulness of the PLASMIC score in distinguishing TTP from septic DIC without identifying the presence of schistocytes, and to investigate whether D-dimer, fibrin/fibrinogen degradation products (FDP), the FDP/D-dimer ratio, prothrombin time-international normalized ratio (PT-INR), lactate dehydrogenase (LD), and the LD/hemoglobin (Hb) ratio could aid in differentiating between TTP and DIC. Differences between the two groups were assessed using a t-test. Receiver operating characteristic (ROC) analysis was conducted to determine the optimal cutoff point for each measurement in predicting TTP; the optimal cutpoint was defined as the value whose sensitivity and specificity are closest to the area under the ROC curve and for which the absolute difference between sensitivity and specificity is minimal. Statistical analyses were performed using Easy R (EZR) version 1.3.6. 11
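The cutpoint criterion described above can be made concrete with a short sketch. This is an illustrative Python implementation of the minimum |sensitivity - specificity| rule, not the EZR code actually used in the study, and the data in the example are hypothetical placeholders.

```python
# Illustrative sketch (not the authors' EZR code) of choosing a cutoff where
# sensitivity and specificity are closest to each other, as described above.
import numpy as np

def best_cutpoint(values, labels):
    """values: a measurement (e.g., LD/Hb ratio); labels: 1 = TTP, 0 = septic DIC."""
    values, labels = np.asarray(values, float), np.asarray(labels, int)
    best = None
    for c in np.unique(values):
        pred = values >= c                       # a higher value predicts TTP
        tp = np.sum(pred & (labels == 1))
        fn = np.sum(~pred & (labels == 1))
        tn = np.sum(~pred & (labels == 0))
        fp = np.sum(pred & (labels == 0))
        sens = tp / (tp + fn)
        spec = tn / (tn + fp)
        gap = abs(sens - spec)
        if best is None or gap < best[0]:
            best = (gap, c, sens, spec)
    return best[1:]                              # (cutoff, sensitivity, specificity)

# Hypothetical example data (placeholders, not study values)
cut, sens, spec = best_cutpoint([30, 45, 60, 80, 120, 20, 35, 50], [0, 0, 1, 1, 1, 0, 0, 1])
print(cut, round(sens, 2), round(spec, 2))
```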
Patients
The study included patients with septic DIC (n = 32) and TTP (n = 16).All TTP patients had ADAMTS13 activity levels below 0.5% and positive ADAMTS13 inhibitor titers ranging from 0.6 to 28.7 BU/mL (Table 1).
PLASMIC Score
The PLASMIC scores ranged from 2 to 5 points for septic DIC patients and from 4 to 7 points for TTP patients (Figure 1). Among the total of 48 patients, 11 had PLASMIC scores of 6 or 7, indicating a high-risk group, and all of them were TTP patients. When dichotomized into high risk (scores 6-7) and intermediate-low risk (scores 0-5), the PLASMIC score predicted TTP with a sensitivity of 0.75 [95% confidence interval: 0.48 to 0.93], specificity of 1.00 [95% CI: 0.84 to 1.00], positive predictive value (PPV) of 100% [95% CI: 64.0 to 100.0], and negative predictive value (NPV) of 88.9% [95% CI: 73.9 to 96.9]. This implies that patients with a PLASMIC score of 6 or 7 can be diagnosed as having TTP rather than septic DIC, without confirming the presence of schistocytes. When dichotomized into intermediate-high risk (scores 5-7) and low risk (scores 0-4), the PLASMIC score demonstrated a sensitivity of 0.94 [95% CI: 0.70 to 1.00], specificity of 0.75 [95% CI: 0.57 to 0.89], PPV of 65.2% [95% CI: 42.7 to 83.6], and NPV of 96.0% [95% CI: 79.6 to 99.9]. However, among the cohort of 16 patients diagnosed with TTP, a total of 3 individuals demonstrated a PLASMIC score of 4 or 5 points. In contrast, within the group of 32 patients diagnosed with septic DIC, a substantial 19 patients exhibited an identical PLASMIC score. Consequently, relying solely on the PLASMIC score becomes challenging in practical settings to differentiate TTP patients from those with septic DIC, especially when the PLASMIC score indicates a value of 4 or 5.
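For clarity, the dichotomized performance figures reported above follow from a standard 2x2 table. The short sketch below shows the computation with hypothetical counts chosen only to be consistent with the reported sensitivity and specificity, not the study's exact table.

```python
# Illustrative sketch of sensitivity, specificity, PPV, and NPV from a 2x2 table.
# The counts below are hypothetical placeholders consistent with the reported metrics.
def diagnostic_metrics(tp, fp, fn, tn):
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    ppv = tp / (tp + fp) if (tp + fp) else float("nan")
    npv = tn / (tn + fn) if (tn + fn) else float("nan")
    return sensitivity, specificity, ppv, npv

# Hypothetical counts for "high risk" (PLASMIC 6-7) vs "intermediate-low risk"
sens, spec, ppv, npv = diagnostic_metrics(tp=12, fp=0, fn=4, tn=32)
print(f"sens={sens:.2f} spec={spec:.2f} PPV={ppv:.0%} NPV={npv:.1%}")
```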
D-Dimer, FDP, and FDP/D-Dimer Ratio
The mean value of D-dimer in patients with TTP was 7.11 μg/mL, which was not significantly lower than that observed in patients with septic DIC (36.
PT-INR
The mean value of PT-INR in patients with TTP was 1.06, significantly lower than that observed in patients with septic DIC
LD and LD/Hb Ratio
The mean value of LD in patients with TTP was 1275 U/L, which was significantly higher than that observed in patients with septic DIC (440 U/L, p < 0.001), as shown in Figure 2e. Moreover, we calculated and compared the LD/Hb ratio between TTP patients and those with septic DIC. The average LD/Hb ratio for TTP patients was markedly higher, at 173, in contrast to that of DIC (39.0, p < 0.001), as illustrated in Figure 2f. ROC analysis indicated an AUC of 0.96 [95% CI: 0.91 to 1.0] for the LD/Hb ratio in TTP diagnosis (Table 2), with a cutoff point of 53.7 (IU/10 g) (sensitivity 0.94 [95% CI: 0.70 to 1.00], specificity 0.91 [95% CI: 0.75 to 0.98], PPV 83.3% [95% CI: 58.6 to 96.4], NPV 96.7% [95% CI: 82.8 to 99.9]). Thus, the LD/Hb ratio demonstrated superiority in differentiating TTP from septic DIC. Remarkably, among the 16 patients diagnosed with TTP, all but one displayed an LD/Hb ratio equal to or exceeding 53.7 (IU/10 g). The exception exhibited an LD/Hb ratio of 43.3 (IU/10 g), alongside a PLASMIC score of 5, ADAMTS13 activity below 0.5%, an inhibitor titer of 2 BU/mL, and a platelet count of 3.0 × 10^4/µL, indicative of a severe manifestation of TTP. Nonetheless, this particular patient presented no signs of renal dysfunction, showed low levels of hemolysis throughout the disease course, and had only slight positivity in urine occult blood during the diagnostic phase before treatment, which might explain the relatively modest LD and LD/Hb ratio values. Hence, we deemed an LD/Hb ratio of 53.7 (IU/10 g) or higher the decisive threshold for TTP diagnosis.
Intriguingly, all septic DIC patients with a PLASMIC score of 5 exhibited an LD/Hb ratio lower than 53.7, strongly suggesting the potential use of the LD/Hb ratio alongside the PLASMIC score to discriminate between TTP and septic DIC patients. Furthermore, within the septic DIC cohort, the three patients displaying LD/Hb ratios surpassing 53.7 (IU/10 g) manifested serum creatine kinase (CK) levels exceeding 1000 IU/L, while none of the TTP patients demonstrated CK levels surpassing 1000 IU/L (Table 1).
Discussion
In practical clinical scenarios, differentiating between patients with TTP and those afflicted with septic DIC often poses a challenging task. This difficulty arises because results for ADAMTS13 activity and for the presence or absence of schistocytes in peripheral blood are not immediately available, particularly during emergency situations and nighttime hours. However, a rapid differential diagnosis is necessary to enable prompt plasma exchange to save the lives of patients with TTP. The primary distinction between septic DIC and TTP lies in the fact that septic DIC patients have serious infections and fibrin thrombi throughout the body, whereas TTP patients have no serious underlying disease and have platelet thrombi throughout the body. However, identifying the presence of a serious infection is sometimes surprisingly difficult in actual clinical practice, as it may be overshadowed by other potential causes such as connective tissue diseases or malignancies. Moreover, the early stages of a severe infection can manifest with TTP-like pathological features. Intriguingly, an experimental porcine model of septic DIC induced by intraperitoneal injection of lipopolysaccharide revealed that, as early as 12 h after injection, platelet thrombi, rather than fibrin clots, were frequently observed in the kidneys, effectively simulating a TTP-like condition. 12 After that, sustained stimulation of the vascular endothelium and macrophages by lipopolysaccharide may trigger the coagulation system and lead to DIC. These experimental findings highlight the occasional difficulty, even at the pathological level, in distinguishing between septic TTP and septic DIC. Moreover, surprisingly, the D-dimer values, which indicate fibrin thrombus formation, were elevated above normal in all patients with TTP in actual clinical practice, as shown in Table 1. Considering the above, we decided to investigate whether the PLASMIC score, which was developed to select TTP patients among TMAs with more than 1% schistocytes and platelets less than 150,000/μL, would be useful in differentiating TTP patients from septic DIC patients.
Several diagnostic criteria exist for septic DIC, including those formulated by the International Society on Thrombosis and Haemostasis (ISTH), 13 the Japanese Society of Thrombosis and Hemostasis (JSTH), and the Japanese Association for Acute Medicine (JAAM). 9,10 In our study, we employed the diagnostic criteria established by JAAM, as they are specifically designed to detect DIC at an earlier or milder stage than the ISTH and JSTH criteria. Consequently, they may be able to identify abnormal coagulation patterns resembling TTP in the early phase of septic DIC, a situation in which clinicians may hesitate to distinguish between the two in clinical practice.
Among the cohort of 16 patients diagnosed with TTP, a total of 13 individuals were classified within the high PLASMIC score category, exhibiting a score of 6 or higher. Strikingly, none of the patients affected by septic DIC belonged to this group. This corresponds to a positive predictive value of 100%, indicating that a patient showing a PLASMIC score of 6 or higher is very likely to have TTP. Importantly, these findings highlight that employing a PLASMIC score of 6 or 7, without necessitating the evaluation of schistocytes, not only enables the identification of TTP patients within the spectrum of TMAs but also allows for reliable differentiation between TTP and septic DIC cases. However, it is worth noting that a notable proportion of septic DIC patients (19 of 32 cases) and TTP patients (3 of 16 cases) presented with a PLASMIC score of 4 or 5. This poses a considerable challenge in accurately distinguishing between these two groups when based solely on the PLASMIC score.
In addition to the PLASMIC score, we examined which measurements are useful in differentiating TTP from septic DIC. All measurements except D-dimer and FDP were significantly different between the two groups. Although Vincent et al 14 reported that an abnormal coagulation profile, including the D-dimer level, can differentiate DIC from TMAs, surprisingly none of the TTP patients had a D-dimer within normal limits. Therefore, it was challenging to differentiate between TTP and DIC based solely on the presence or absence of elevated D-dimer or FDP levels. However, when the D-dimer level was above 9.8 μg/mL or the FDP level was above 22.8 μg/mL, the likelihood of TTP was considered low. Additionally, the FDP/D-dimer ratio was calculated and compared between the two groups. This approach was motivated by the understanding that the FDP/D-dimer ratio tends to increase during hyperfibrinolysis. 15 Consequently, it was hypothesized that plasminogen activator inhibitor 1, secreted from the vascular endothelium injured by sepsis in patients with septic DIC, would suppress fibrinolysis 16 and produce a lower FDP/D-dimer ratio, providing a means of discrimination between the two groups. Indeed, the FDP/D-dimer ratio was significantly higher in the TTP group, and no TTP cases were observed with an FDP/D-dimer ratio below 1.93. Therefore, an FDP/D-dimer ratio below 1.93 was considered indicative of the absence of TTP. The PT-INR showed a greater AUC and a higher negative predictive value than the FDP/D-dimer ratio, and a PT-INR exceeding 1.07 rendered it improbable for a patient to have TTP. This observation suggests that, despite elevated D-dimer levels in individuals with TTP signifying the occurrence of fibrin thrombus formation, the extent of fibrin thrombus formation in TTP patients may be insufficient to precipitate a consumptive reduction in coagulation factors that would prolong the PT-INR.
There are various reports on LD levels in patients with TTP, and LD levels are considered very important in TTP. One is that the mortality rate increases when LD levels are 10 times higher than normal. 17 Another is that substituting LD for the hemolysis indicator in the PLASMIC score has been reported as diagnostically useful, albeit with lower specificity. 18 Moreover, Zhao et al 19 considered that LD might be a potent element for the early diagnosis of TTP; however, they did not ascertain the usefulness of LD values in differentiating TTP from septic DIC. The present study demonstrated that the LD value showed a greater AUC than D-dimer, FDP, the FDP/D-dimer ratio, or PT-INR for identifying TTP patients. Furthermore, the LD value had higher sensitivity and specificity, so that patients with LD values below 554 IU/L did not seem to be TTP patients. LD is present in various tissues, including the heart, red blood cells, liver, kidneys, brain, lungs, and skeletal muscles, and is elevated in many diseases; elevated LD in patients with TTP, however, is likely due to hemolysis. Hence, in order to obtain a more representative measure of hemolysis severity in TTP, we employed the ratio of LD to Hb levels. As can be seen in Figure 2f, the LD/Hb ratio values showed the least overlap between TTP and septic DIC patients, and the ROC analysis showed a surprisingly high AUC of 0.96 (Table 2), which was considered excellent for selecting TTP patients. At a cutoff point of 53.7 IU/10 g, the sensitivity and specificity were 0.94 and 0.91, respectively. Furthermore, the NPV of the LD/Hb ratio for TTP was remarkably high at 96.7%, suggesting that an LD/Hb ratio below 53.7 IU/10 g effectively rules out TTP. The PLASMIC score incorporates the presence or absence of hemolysis, but as a measure of the degree of hemolysis, the LD/Hb ratio may provide additional information for a TTP diagnosis. Notably, the LD/Hb ratio was 53.7 IU/10 g or greater in 15 out of 16 TTP patients (all but the 1 exceptional case mentioned in the results), whereas only 3 septic DIC patients, with PLASMIC scores of 3 or 4, showed LD/Hb ratios above 53.7 IU/10 g. The elevated LD/Hb ratio in these septic DIC patients is likely due to rhabdomyolysis, which is common in sepsis, 20 rather than hemolysis, because they had increased CK levels. Furthermore, the CK levels in these patients were over 1000 IU/L, suggesting that if the cause of an elevated LD level is rhabdomyolysis, CK may need to exceed 1000 IU/L for the LD/Hb ratio to rise above 53.7 IU/10 g.
From the above, we believe that the PLASMIC score and LD/Hb ratio can be used together to differentiate patients with TTP from those with septic DIC. In essence, irrespective of the presence or absence of schistocytes, when a patient exhibits a PLASMIC score of 6 or 7, they can be classified as suffering from TTP. Moreover, even with a score of 4 or 5, if the LD/Hb ratio surpasses 53.7 IU/10 g and CK levels remain within the normal range, there is a strong likelihood of TTP. Naturally, by consulting the PT-INR and FDP/D-dimer ratio as well, we can more confidently ascertain the presence of TTP, as shown in this study. A simple sketch of this combined rule is given below.
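This is a minimal sketch of the combined rule described in the preceding paragraph; the thresholds are those derived from this cohort, the unit conventions (LD in U/L, Hb in g/dL, CK in IU/L) are assumptions of the sketch, and the rule would require external validation before any clinical use.

```python
# Minimal sketch of the combined PLASMIC + LD/Hb + CK rule described above.
# Thresholds come from this single retrospective cohort; units are assumed
# to be LD in U/L, Hb in g/dL, and CK in IU/L (assumptions of this sketch).
def likely_ttp(plasmic_score, ld, hb, ck):
    ld_hb_ratio = ld / hb
    if plasmic_score >= 6:
        return True                       # high-risk PLASMIC score: classify as TTP
    if plasmic_score in (4, 5) and ld_hb_ratio >= 53.7 and ck < 1000:
        return True                       # intermediate score rescued by the LD/Hb ratio
    return False

# Hypothetical patient: PLASMIC 5, LD 1100 U/L, Hb 8.2 g/dL, CK 240 IU/L
print(likely_ttp(5, 1100, 8.2, 240))      # -> True (LD/Hb ratio ~134)
```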
There are several limitations to this study. The sample size is small, and it is a retrospective study. Future prospective studies with larger sample sizes are needed. Although the septic DIC patients in this study had a variety of underlying diseases, LD and Hb may vary with the underlying disease and should be investigated in septic DIC patients with a broader range of underlying diseases. However, in contrast to assessing ADAMTS13 levels or identifying the presence of schistocytes, the PLASMIC score and the aforementioned laboratory test results, notably the LD/Hb ratio, can be readily obtained from any clinical laboratory. These measurements hold the potential to distinguish TTP patients from those with septic DIC and to support prompt plasma exchange or caplacizumab administration. 21
Conclusion
The combination of the LD/Hb ratio with the PLASMIC score may be useful to distinguish between TTP and septic DIC and to identify patients with TTP who need rapid plasma exchange.
Figure 1. PLASMIC scores of patients with septic DIC or TTP. The open bar represents the distribution of the PLASMIC score in septic DIC patients, and the closed bar represents the distribution of the PLASMIC score in TTP patients. Abbreviations: DIC, disseminated intravascular coagulation; TTP, thrombotic thrombocytopenic purpura.
Figure 2. Distribution of the values of D-dimer (a), FDP (b), FDP/D-dimer ratio (c), PT-INR (d), LD (e), and LD/Hb ratio (f) in patients with septic DIC and those with TTP. The number written at the top of the figure for each measurement is the mean value for each group. The long horizontal lines in each panel are the cutoff values for identifying TTP patients for each measurement. D-dimer and FDP were not significantly different between septic DIC and TTP, but the FDP/D-dimer ratio, PT-INR, LD, and LD/Hb ratio were significantly different between the two groups. The LD/Hb ratio values showed the least overlap between the two groups. Abbreviations: FDP, fibrin/fibrinogen degradation products; LD/Hb, lactate dehydrogenase/hemoglobin; TTP, thrombotic thrombocytopenic purpura; DIC, disseminated intravascular coagulation; AUC, area under the curve; ROC, receiver operating characteristic.
Table 1. Characteristics of Patients with TTP and Those with Septic DIC.
Table 2. Performance Metrics of Various Measurements for Identifying TTP.
Documentation of veterinary practices from Gujjar and Bakarwal tribes of District Poonch, Jammu & Kashmir: A boon for animals from our ancestors
Background: Gujjar and Bakarwal tribal communities are a treasure trove of traditional veterinary knowledge, as they have been using plants to keep their livestock healthy and free from diseases for centuries. However, this knowledge is declining day by day due to several factors. The present study was aimed at surveying and documenting the medicinal plants used traditionally by the tribal communities of Gujjar and Bakarwal in the Poonch district of Jammu and Kashmir (J&K), India to treat livestock ailments. Methods: A systematic ethnobotanical survey was conducted in 12 villages between July 2018 and March 2020. Data were gathered from the local inhabitants using semi-structured questionnaires and analyzed quantitatively using use-value (UV), relative frequency of citation (RFC), informant consensus factor (ICF), and fidelity level (FL). Results: A total of 31 medicinal plant species belonging to 30 genera and 24 families were encountered, with herbs as the dominant plant habit (70.97%). Roots were most frequently used for remedy preparation (35.14%), followed by leaves (32.43%), with oral administration as the main application mode. Use-value and relative frequency of citation ranged from 0.03-0.72 and 0.03-0.48, respectively. Based on these values, Rumex nepalensis was found to be the most important ethnoveterinary species used. The reported informant consensus factors were very high (0.81-1.00), indicating broadly shared knowledge about ethnoveterinary plants in the communities. The use category with the greatest number of plant species (10 spp.) was gynecological/andrological disorders. Such documentation can help standardize active principles, which can further lead to the development of more efficient veterinary medicines.
Background
Ethnoveterinary knowledge is a holistic body of folk beliefs, skills, knowledge, experience, and practices employed by indigenous and local communities for curing ailments of livestock. This knowledge varies across countries, regions, and communities (McCorkle 1986, Xiong & Long 2020). These traditional knowledge systems and practices are often centuries-old (Dutta et al. 2021a, Gurib-Fakim 2006) and continue to be important, especially in rural communities across the world, as they are useful, readily available, have minimal side effects, and provide a sustainable and low-cost alternative to allopathic drugs (McCorkle & Green 1998).
In India, ethnoveterinary medications have been used since ancient times (Sikarwar & Tiwari 2020). Vedic literature, particularly the Atharvaveda, is a repository of traditional medicine that includes prescriptions to treat animal diseases. These ethnoveterinary traditions form an integral part of family life and have an important religious, economic, and social role. Local communities know the principles, operations, and skills of the administration of remedies for livestock, which contributes in many respects to the socioeconomic growth of the rural population. The livestock sector has the potential to generate job opportunities, especially for marginal and small-scale farmers and landless workers, who own about 70% of the country's livestock. Communities living in rural areas, far away from towns and cities, depend on plant-based medicines for common diseases, and the usage of medicinal plants for the treatment of diseases is a common practice (Fayaz et al. 2019). Ethnoveterinary knowledge is conveyed mostly through oral transmission (Aziz et al. 2020, Dutta et al. 2021b, Nfi et al. 2001). This traditional oral knowledge is declining due to improper documentation, the death of elder members of the tribe or community, rapid modernization, and the lack of interest of the younger generation in traditional practices (Bhatia et al. 2014, Idu et al. 2011, Kala et al. 2006). The Union Territory of Jammu & Kashmir (J&K) is part of the Indian Himalayan region, lying in the lap of the Western Himalayas, and its indigenous communities rely heavily on traditional phytotherapies and traditional healers. The literature reveals various studies that have documented traditional ethnoveterinary knowledge from J&K (Dutta et al. 2021a, Beigh et al. 2003, Khuroo et al. 2007, Rashid et al. 2007, Sharma & Singh 1989, Shah et al. 2015). However, studies on the ethnoveterinary uses of plants are lacking in district Poonch (J&K). Given this gap, an attempt has been made to document and describe various cattle diseases and their remedies practiced by the Gujjar and Bakarwal communities. We hypothesized that due to their remote location these communities would have well-preserved ethnoveterinary knowledge that would differ from other parts of the region.
Study Area
District Poonch is one of the remote districts of the Union Territory of J&K, at an altitudinal range of 800-4750 m asl. It lies between 73º58'-74º35' E longitude and 33º25'-34º01' N latitude. Situated on the southern foothills of the Pir Panjal range of J&K, it is bordered by Kashmir in the northeast, Rajouri in the south, and Pakistan occupied Jammu Kashmir (PoJK) in the west (Fig. 1). The area experiences a sub-tropical to temperate climate regime, with an average temperature of 30°C during the summer, while winter months record an average temperature of 2°C. The administrative area is distributed over 6 tehsils and 11 blocks comprising 178 villages and 51 panchayats. The total population is 475,835 (Census 2011) in a total geographic area of 1674 km2. About 96% of the population lives in isolated villages (Mughal et al. 2017).
Most of the rural population depends on agriculture and animal husbandry for their livelihoods. Gujjars and Bakarwals constitute the majority of the population. While the Gujjars are semi-nomadic, the Bakarwals are truly nomadic communities. These communities depend heavily on the local flora for their basic needs, as they rear livestock on high pastures, lacking any modern conveniences (Shah et al. 2015). The district is rich in terms of biodiversity, harboring many rare, endemic, and threatened plants.
Survey and data collection
An extensive and systematic field exploration was conducted in the study area from July 2018 to March 2020 for the collection of plants that are being used by the local inhabitants to treat diseases in livestock. A total of 58 randomly selected participants, including traditional healers, shepherds, milkmen, elderly people, and others belonging to the Gujjar and Bakarwal tribes, were interviewed using a semi-structured questionnaire. All interviews were conducted in the local language, after receiving prior informed consent from the participants. In the interviews, utmost attention was given to local veterinary knowledge of both wild and cultivated medicinal plants.
The medicinal plants were photographed, voucher specimens of all species were collected from the study area, and GPS coordinates of each voucher were recorded. The collected plants were dried, mounted on herbarium sheets, and identified in the herbaria of the Department of Botany, University of Jammu (HBJU), Jammu and the Janaki Ammal Herbarium, Indian Institute of Integrative Medicine (RRLH), Jammu, and with the help of various regional floras (Sharma & Kachroo 1983, Singh & Kachroo 1994). For the latest accepted names and nomenclatural position of the taxa, Plants of the World Online was followed (POWO 2019). The voucher specimens were deposited at the herbarium, Department of Botany, University of Jammu, Jammu, J&K, India.
Data analysis
The interview data obtained were analyzed quantitatively using four indices viz., use-value (UV), Relative Frequency of Citation (RFC), Informant Consensus Factor (ICF), and Fidelity Level (FL%).
Use-value (UV)
Use-value (UV) was calculated to elucidate the types of uses associated with particular species and their relative importance to the participants (Phillips et al. 1994) as UV = ΣU/n, where U refers to the number of use-reports cited by each informant for that plant species and n is the total number of participants interviewed. When a plant is important, it has many use-reports and a high use-value, and vice-versa.
Relative frequency of citation (RFC)
Relative frequency of citation (RFC) was used to determine the level of traditional knowledge about the use of ethnoveterinary plants in the study areas (Tardío & Pardo-De-Santayana 2008) using:
RFC = Fc/N
where Fc is the number of participants who mention the use of the plant and N is the number of participants that have participated in the survey.
Informant consensus factor (ICF)
To determine the homogeneity of uses for a particular plant species, all the diseases of the livestock were broadly classified into 9 categories, and the informant consensus factor (Heinrich et al. 1998) was calculated using the following equation: ICF = (nur - nt)/(nur - 1), where nur is the total number of use-reports of a category and nt is the number of species used for that category. ICF values approach 1 when there is exchange of knowledge among the participants and approach 0 when participants do not exchange knowledge and choose plants randomly.
Fidelity level (FL)
FL compares the fidelity of a species to one ailment versus being reported for many, and allows us to know whether this is a general or specific treatment (Friedman et al. 1986). Fidelity level was calculated as FL (%) = (Np/N) × 100, where Np is the number of use-reports for a given species for a particular ailment category and N refers to the total number of participants stating the plant to be useful for any ailment category.
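To make the four indices concrete, the following is a minimal sketch (not the authors' analysis code) that computes UV, RFC, FL, and ICF from a small set of hypothetical use-report records; species names and counts in the example are placeholders only.

```python
# Illustrative computation of UV, RFC, FL, and ICF from hypothetical use-reports.
from collections import defaultdict

# (informant, species, ailment category) use-reports -- placeholder data
reports = [("inf1", "Rumex nepalensis", "gastrointestinal"),
           ("inf1", "Rumex nepalensis", "dermatological"),
           ("inf2", "Rumex nepalensis", "gastrointestinal"),
           ("inf2", "Urtica dioica", "gynecological"),
           ("inf3", "Urtica dioica", "gynecological")]
N = len({inf for inf, _, _ in reports})   # total participants interviewed

use_reports = defaultdict(int)            # species -> total use-reports (sum of U)
cited_by = defaultdict(set)               # species -> informants citing it (Fc)
cat_reports = defaultdict(int)            # category -> use-reports (nur)
cat_species = defaultdict(set)            # category -> species used (nt)
per_ailment = defaultdict(int)            # (species, category) -> use-reports (Np)

for inf, sp, cat in reports:
    use_reports[sp] += 1
    cited_by[sp].add(inf)
    cat_reports[cat] += 1
    cat_species[cat].add(sp)
    per_ailment[(sp, cat)] += 1

for sp in use_reports:
    uv = use_reports[sp] / N                                  # UV = sum(U)/n
    rfc = len(cited_by[sp]) / N                               # RFC = Fc/N
    ns = len(cited_by[sp])                                    # participants citing the species
    fl = max(per_ailment[(sp, c)] for c in cat_species
             if (sp, c) in per_ailment) / ns * 100            # FL for the main ailment
    print(sp, round(uv, 2), round(rfc, 2), round(fl, 1))

for cat, nur in cat_reports.items():
    nt = len(cat_species[cat])
    icf = (nur - nt) / (nur - 1) if nur > 1 else 0.0          # ICF = (nur - nt)/(nur - 1)
    print(cat, round(icf, 2))
```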
Demographic description of participants and collection sites
The present study successfully documented the plant species with ethnoveterinary importance in the study area. In our survey, a total of 58 participants aged between 25 and 84 years were interviewed for the documentation of traditional veterinary knowledge in twelve villages of district Poonch (Table 1). Most of the participants were men (44, 75.86%), and the remaining 14 (24.14%) were women. All these participants were from the tribal communities of Gujjar and Bakarwal of the study area. The majority of them belonged to the age group 45-74 years. Most participants had received little education, i.e. up to primary standard. The number of participants below 45 years was limited, and older participants contributed the major portion of the knowledge, indicating their higher level of ethnoveterinary knowledge. We could only interview a very limited number of female participants due to cultural restrictions, given that all interviewers were male. The recent review article of Sikarwar and Tiwari (2020) reported 270 plant species of 84 families used by rural tribes and Central Indian people for ethnoveterinary practices. (Species-wise remedies, with their use-reports, UV, and RFC values, are presented in Table 2.) In the present study, the maximum number of plant species used were herbs (70.97%), followed by trees and shrubs (12.9% each) and climbers (3.23%), indicating that herbs are the primary source of ethnoveterinary medicine for the Gujjar and Bakarwal tribes in the region, which is in line with earlier studies conducted in J&K and India (Ahmad et al. 2017, Punjani & Pandey 2015). Studies in lower areas reported trees to be most frequently used for veterinary purposes (Nigam & Sharma 2010, Rajakumar & Shivanna 2012).
Therapeutic values
Livestock is an integral part of the Gujjar and Bakarwal tribal communities and plays an important social and economic role in their life. Ethnoveterinary medicine is an integral part of daily life and is applied especially for the treatment of pneumonia, pyrexia, constipation, stomach pain, accouchement, inability to inseminate, prolapse, wounds, retention of placenta, general weakness, fractures, hemorrhagic septicemia, maggots in wounds, body pain, cold, cough, and snakebite, and as a vermicide and galactagogue. Most of these ailments belong to gynecological/andrological, dermatological, gastrointestinal, and liver-related issues. The animals treated in the present study were buffaloes, oxen, horses, cows, sheep, and goats.
The majority of plant species in the present investigation were used to cure a single disease only, suggesting the species' usefulness and reliability in this cultural context for a specific treatment purpose. A herbal preparation of Acorus calamus and Ulmus villosa with fruits of Capsicum annuum and seeds of Trigonella sp. was the only herbal mixture applied; otherwise, the local communities used only single-species treatments. The reported plant species were either cultivated or collected from the forest, providing cost-effective treatment for cattle compared with modern drugs.
Use-value (UV) and Relative Frequency of Citation (RFC)
Relative frequency of citation identified the therapeutic plants most used by the local population. The dominant species in the study area were Rumex nepalensis (RFC=0.48), Bistorta amplexicaulis (RFC=0.34), Urtica dioica (RFC=0.21), and Skimmia laureola (RFC=0.19), as the maximum number of participants cited these. In the present study, RFC ranged from 0.03 to 0.48 (Table 2). Based on use-value (UV), which takes the diversity of uses into account, the most important plant species reported were Rumex nepalensis (UV=0.72), followed by Skimmia laureola (UV=0.43), Berberis lycium and Bistorta amplexicaulis (UV=0.34 each), and Ranunculus bulbosus (UV=0.31). Aconitum violaceum and Acorus calamus, with the minimum use-value (UV=0.03 each), were the least used species in the study area.
Informant Consensus Factor
The ailments reported from the study area were classified into nine (9) different ailment categories. The highest consensus of the participants was obtained for the treatment of physical pains (ICF=1), followed by miscellaneous disorders (ICF=0.94), muscular-skeletal disorders (ICF=0.93), respiratory disorders (ICF=0.92), and dermatological and gynecological/andrological disorders (ICF=0.9 each). The documented ICF values in the present study are in line with the previous report of Sharma et al. (2012), who reported the highest ICF for urological disorders (0.95) and the lowest for nutritional diseases (0.80). In another study, ICF ranged from 0.75-0.95 (Meen et al. 2020), with higher values for respiratory, gastrointestinal, and reproductive disorders. The ICF values were mostly on the higher side in the present study, which suggests that the participants share the information widely. The maximum number of plant species was used for treating gynecological/andrological disorders (10 species, with 88 use-reports), followed by fever (8 species), gastrointestinal disorders and snakebite (5 species each), dermatological disorders (4 species), and miscellaneous and respiratory disorders (3 species each) (Table 3).
Comparison of traditional uses with previous studies
The comparison with available literature revealed novel uses of ethnoveterinary plants in the study region. The present study reported the use of root powder of Achillea millefolium, given to cattle to treat snakebites (Table 2). A previous study from district Rajouri (J&K) reported the use of shoots and leaves for urinary disorders in cattle (Jamwal & Kant 2008). In Himachal Pradesh, the dried powder of the whole plant was given orally with hot water to treat wounds, skin allergy, and sunburn (Radha et al. 2020). The root powder of Aconitum violaceum was given to buffalos, oxen, and horses in case of snakebite (Table 2). We could find no other ethnoveterinary uses of this species from India.
As per the respondents in the study area, the root powder of Acorus calamus is given orally to treat stomach pain in horses. Previous studies reported many other ethnoveterinary uses for different parts of this plant. The tribal and rural communities in Uttar Pradesh use the leaf paste and rhizome powder of A. calamus to treat wounds in animals (Gautam et al. 2015). An amalgamation of rhizome powder of A. calamus and Artemisia scoparia prepared with Brassica campestris (mustard) or Sesamum indicum (sesame) oil is used by the indigenous people in Himachal Pradesh for massage therapy in case of fever, joint pain, and arthritis in livestock (Bhatti et al. 2017). A study from the Shivalik hilly zones of Himachal Pradesh reported the ethnoveterinary use of A. calamus rhizome powder to treat epilepsy, urinary problems, and hydrocele, and as an anthelminthic (Kumar & Chander 2018). Similarly, the leaves, roots, and whole plant of A. calamus are used to treat various gastrointestinal issues in sheep, cows, buffalos, and goats in the Darjeeling subdivision of West Bengal and district Doda of Jammu and Kashmir (Khateeb et al. 2015, Mondal 2012). The bulb powder of Allium cepa was given orally to animals to treat snakebites in the study area (Table 2). The people in the Bandipora district of Jammu and Kashmir used soft balls of crushed bulbs of A. cepa with salt as a remedy against cold and anorexia in cattle, and in cows to stimulate the estrus cycle. These balls are also given to horses to cure frothy bloat caused by the grazing of Trifolium repens (Bhardwaj et al. 2013). An oral intake of 100 g paste of Allium cepa has also been reported to alleviate swelling in cattle (Jamwal & Kant 2008). A mixture of powdered bulbs of Allium cepa with black salt is given along with water to cure foot and mouth disease in cattle in Hassan district of Karnataka (Kumar & Nagayya 2017). The mixture of A. cepa bulb with black salt and water is given to cows, buffalos, oxen, goats, and sheep by the traditional herbal healers in Uttarakhand as a remedy against poisoning (Phondani et al. 2010). In Orissa, the bulb paste of A. cepa is reported to cure fever (Satapathy 2010), whereas in the tribal societies in Rajasthan a decoction of the whole plant is given orally to sheep and goats as a tonic and febrifuge (Meen et al. 2020).
The bulbs of Allium sativum were powdered and given with milk and ghee to cure pyrexia (Table 2). A previous study reported garlic used to treat diarrhea in sheep, cows, goats, and buffaloes (Khateeb et al. 2015). In the Kalakote range of Jammu and Kashmir, a paste of A. sativum bulb and curd is given to female buffalos and is considered an aphrodisiac (Jamwal & Kant 2008). A paste of the bulb is administered once daily for five days to treat cough in Andhra Pradesh (Pragada & Rao 2012). In Haryana, the oral intake of garlic and elaichi (cardamom) mixed with jaggery is reported to cure cold and fever (Yadav et al. 2014). Many ethnoveterinary properties, such as efficacy against cough and cold, bronchitis, brain disease, earache, indigestion, food poisoning, diarrhea, injuries, and snake bite, have been reported from Central India, with the population using the juice of bulbs of A. sativum, or the bulbs in multiple combinations with mustard oil, or mustard oil and ash of cow dung, or the bulb paste and beeswax, as well as the bulb paste, milk, and cooking oil (Sikarwar & Tiwari 2020). A paste prepared by mixing the bulbs of A. sativum with the bark of Oroxylum indicum and Terminalia bellirica in rice-soaked water is used to treat black quarter disease in cattle in Karnataka (Rajakumar & Shivanna 2012). In the Marwar region of Rajasthan, the stem of Allium sativum is mixed with flowers of Punica granatum and milk and used against gastrointestinal infection (Meen et al. 2020). The animal owners and housewives in Uttarakhand use A. sativum for various ethnoveterinary purposes such as food poisoning, tympany, sterility, skin infection, arthritis, internal parasites, foot and mouth disease, and stomachache (Tiwari & Pande 2010). Anthelmintic properties of garlic have been reported from West Bengal (Saha et al. 2014).
The underground parts of Arisaema jacquemontii were powdered and given orally to cattle to cure pyrexia and snakebite in the study area. No ethnoveterinary uses have been reported in the literature for this plant species. The powder of the stem bark and a paste prepared from the outer bark of the roots of Berberis lycium were used externally to treat wounds, whereas the oral decoction of the root is given to treat fractures in cattle (Table 2). In contrast, the root decoction of B. lycium was previously reported to treat jaundice in cows, goats, and buffaloes in the Doda district of J&K (Khateeb et al. 2015). The bark of B. lycium was also used to treat foot and mouth disease of cattle in the Western Himalaya (Shoaib et al. 2020). The present study reported the use of rhizomes of Bistorta amplexicaulis as a galactagogue. The residue left after extracting the seed oil from Brassica rapa, locally known as 'Khal', was also used as a galactagogue (Table 2). The seed oil of B. rapa, in combination with a paste of the bulb of Allium cepa, has previously been reported for treating wounds in Madhya Pradesh (Singh & Sudip 2014).
The leaf of Calotropis procera was used to treat hemorrhagic septicemia and swellings in the study area (Table 2). The literature survey found various other ethnoveterinary properties for this plant species. The people in the tribal regions of Andhra Pradesh apply the milky leaf latex of Calotropis procera on inflamed areas to relieve inflammation, and on snake bites to neutralize the poison (Pragada & Rao 2012). People in Central India use the roots, leaves, and flowers of this plant species, either in powder form or in combination with milk or mustard oil, to treat bone fractures, tumors, swellings, conjunctivitis, earache, skin diseases, urine retention, indigestion, diarrhea and dysentery, stomachache, and falling of the tail, for the healing of wounds, to ease delivery, and for snake bite (Sikarwar & Tiwari 2020). The leaves and leaf latex have been reported to remove intestinal worms in sheep, act as a galactagogue, and are employed in the detachment of the placenta after delivery (Yadav et al. 2014). The indigenous people in Himachal Pradesh apply the milky leaf latex on the bitten part of the body to neutralize snake poison and dog bites (Bhatti et al. 2017).
The present study documented the use of whole-plant powder of Cannabis sativa given orally to treat body pains in cattle, whereas balls made from powdered leaves, locally known as 'Peda', were given to cattle to treat intestinal worms (Table 2). The Karbi tribe in Assam and the vaidyas, hakims, sadhus, and tribal people in the Jhansi district of Uttar Pradesh have previously been reported to use the leaf, and a leaf mixture of C. sativa with whey and water, to treat diarrhea in animals (Kumar et al. 2020, Nigam & Sharma 2010). Using the leaves and seeds of C. sativa, people in the Shivalik Hills of Himachal Pradesh were reported to treat reddishness, cough, cataract, and urinogenital disorders (Kumar & Chander 2018). Pastoralist communities in Jammu and Kashmir use the whole-plant powder for improving poor reproductive performance in cattle and buffaloes (Khateeb et al. 2017), while in the Kalakote range of J&K, the leaf powder is given orally for anorexia in cattle (Jamwal & Kant 2008). In Orissa, balls made from C. sativa and seeds of Cicer arietinum were given orally once a day against chronic dysentery in cattle (Satapathy 2010). In the Sikkim Himalaya, stem pieces of C. sativa are fed to livestock to treat inflammation and act as a tonic for cattle (Bharati & Sharma 2012). In Uttarakhand, the traditional herbal healers apply the boiled leaves of Cannabis sativa with the ash of Pinus roxburghii and black salt externally to treat sprains in animals such as cows, buffaloes, sheep, goats, and dogs (Phondani et al. 2010).
The fruits of Capsicum annuum were given orally to treat pyrexia in cattle, and the mature fruit powder with buttermilk (lassi) was given to cattle to cure cough in the study area. The fruit paste has been reported earlier to be useful against foot and mouth disease in animals (Pragada & Rao 2012). A mixture of C. annuum fruit and salt was reported by the pastoralists of J&K to be useful against endoparasites (Khateeb et al. 2017). In the Jhansi District of Uttar Pradesh, the paste of seeds of Allium sativum, Piper nigrum, Cuminum cyminum, and alum is given to alleviate dullness in animals (Nigam & Sharma 2010). In Uttarakhand, the healers were reported to use a powdered mixture of the pod of C. annuum and the bark of Zanthoxylum armatum to treat fasciolosis in buffaloes, cows, and oxen (Phondani et al. 2010). Other reported ethnoveterinary properties of the fruits and stem of this plant species include uses against hoof infection, skin disease, dog bite, wounds, blisters, eczema, hemorrhagic septicemia, foot and mouth disease, and burns (Tiwari & Pande 2010).
The present study documented the anthelmintic property of the leaf juice of Clematis grata in cattle; no literature reports on ethnoveterinary uses of this species could be found (Table 2). A paste prepared from the whole herb or roots of Cynodon dactylon was applied to cattle wounds in the study area. In J&K, the whole plant is generally given as feed, and the plant paste made with water is applied to the pelvic region to treat the problem of oliguria in cows, buffaloes, sheep, and goats (Khateeb et al. 2015). In Andhra Pradesh, the whole plant of C. dactylon is known as 'Garika' and is mixed with pepper along with toddy and given orally twice a day for one week to treat rheumatism in cattle, buffaloes, goats, and sheep (Ramana 2008), whereas in Assam it is used to treat vomiting in goats, pigs, and cows (Kumar et al. 2020). Ethnoveterinary uses as a galactagogue and to treat conjunctivitis were also reported previously from Central India (Sikarwar & Tiwari 2020), and use for gastric troubles, bone fracture, sprains, mastitis, and clotting of internal blood injury has been reported from Uttarakhand (Pande et al. 2007).
The leaf juice of Ficus carica was used to expel worms from wounds in cattle. The tribal communities in the Todgarh-Raoli Wildlife Sanctuary of Rajasthan use the latex of Ficus carica for treating eczema and carbuncles in animals (Galav et al. 2013). The ethnic tribal communities in the Darjeeling District of West Bengal use the leaves and fruits of F. carica to treat diabetes and gastric problems in domestic animals (Mondal 2012).
The roots of Geranium wallichianum were directly given to animals to cure pyrexia and as a galactagogue. Previous studies in J&K reported the use of crushed fresh roots against weakness, inflammation of hooves, warts, and abscissions in cows (Bhardwaj et al. 2013, Khuroo et al. 2007), while the use to treat bone fractures and broken horns was reported from Uttarakhand (Pande et al. 2007, Phondani et al. 2010). The dried root powder of Girardinia diversifolia was given with milk to cattle to cure retention of the placenta, while the leaf paste was applied externally to treat wounds. The root paste of G. diversifolia has been previously reported to be used for pimples and boils in domestic animals in Uttarakhand (Tiwari & Pande 2006). The fresh leaves of Grewia optiva were given orally to treat retention of the placenta in cows and buffaloes. In Uttarakhand the species is used to remedy throat infection, indigestion, dysentery, constipation, diarrhea, bone fracture, sprains, tonsils, and pregnancy, and to increase lactation (Pande et al. 2007).
A leaf decoction of Mentha longifolia, made as a tea, was given to cattle to cure pyrexia; no such use reports have been found in the literature. The roots of Phytolacca acinosa were given to animals in the study area to cure the inability to inseminate. A previous study from District Doda, J&K reported a powdered mixture of the whole plant with whey and milk given to cows, buffaloes, sheep, and goats to treat hematuria (Khateeb et al. 2015). The indigenous people in Himachal Pradesh use the leaves and twigs to treat cough, cold, and constipation in livestock (Radha et al. 2020), while the tribal communities in Uttarakhand give the seeds orally to domestic animals to treat pneumonia, and the leaves to treat fever (Juyal & Ghildiyal 2013, Tiwari & Pande 2006). Fever and joint pain in yak were reported to be treated using the roots of P. acinosa by the Monpa tribe in Arunachal Pradesh (Maiti et al. 2013).
The pounded flowers of Primula denticulata were given orally to cattle to treat snakebite, and the same use has been reported earlier from the Poonch district (Khan & Kumar 2012).
The respondents gave the dried fruit rind powder of Punica granatum to cattle to treat uterus prolapse (Table 2). The comparative literature included various other ethnoveterinary uses for other parts of this plant from J&K and other Indian states. In Jammu and Kashmir, the local inhabitants give the fruit paste and seeds of P. granatum orally to animals to treat urinary problems, hemorrhagic enteritis, and liver problems (Jamwal & Kant 2008, Khateeb et al. 2015). A paste prepared by mixing chopped leaves of P. granatum, root bark powder of Ficus religiosa, and Sesamum indicum oil has been reported from the rural women of Banaskantha district of Gujarat to treat skin infections in animals (Khandelwal 2017). The indigenous people in the Garhwal Himalayan Region give the ground leaves of P. granatum to animals twice a day for three days to treat diarrhea (Bhatt et al. 2013).
The root powder and root decoction of Ranunculus bulbosus were given orally to cattle to treat pneumonia and to expel intestinal worms; this use was reported for the first time from India (Table 2). The present study reported the use of powdered roots of Rumex nepalensis combined with buttermilk to treat general weakness and cough in cattle. The Gujjar tribe in the Kashmir Himalaya was reported to prepare semisolid balls from the roots of R. nepalensis by boiling the root powder in milk along with salt, which were then given to newborn calves to protect them from juvenile infections (Khuroo et al. 2007). A mixture of the roots of R. nepalensis and Piper nigrum has been reported to treat fever, tympany, and bloat in cows, buffaloes, sheep, and goats (Khateeb et al. 2015). This plant species has also been reported to treat diarrhea and dysentery in Uttarakhand (Pande et al. 2007).
According to the participants, the boiled leaves of Skimmia laureola were given to cattle to cure pyrexia, while the leaf powder was mixed with milk and given to cattle to treat cold. The root paste was also applied externally to treat fractures in animals. In J&K, the oral administration of the leaves twice a day for seven days has been reported previously to treat anemia in cows, buffaloes, sheep, and goats (Khateeb et al. 2015).
The use of the leaves of Ulmus villosa to treat prolapse in cattle was reported in the present study for the first time from India (Table 2). Urtica dioica roots were given to cattle as a galactagogue. Pande et al. (2007) reported the ethnoveterinary use of this species from Uttarakhand to treat abdominal pain, wounds, bone fractures, sprains, hematuria, rheumatism, and neck sores, to support lactation, and to regulate fertility. The present study found the fresh leaves of Viburnum grandiflorum given orally to treat constipation, which had never before been reported from India.
The leaves of Vitex negundo were given orally in the study area to cattle to cure pyrexia. In other studies the leaves were used to treat stomachache, reddening eyes, and diarrhea in milk-yielding animals and camels (Sharma & Manhas 2015). The leaves and twigs of this species were used as an appetizer and against mastitis in livestock in Himachal Pradesh (Sehgal & Sood 2013), while in Uttarakhand and Karnataka the same plant part was used as an antidote against snake bites in animals (Harsha et al. 2005). The Chiru tribe in Manipur uses the leaves to treat dermatitis in domestic animals (Rajkumari et al. 2014), and antibacterial and anthelmintic properties in cattle were reported from Tamil Nadu (Kiruba et al. 2006).
In the present study, flour of Zea mays was given with water to treat foot and mouth disease (FMD) in cattle, whereas previous studies from India showed the seeds and flour of Zea mays to be useful against constipation in livestock in the Hamirpur district of Himachal Pradesh (Sehgal & Sood 2013). The use of maize seeds as a galactagogue in milk-yielding animals was also reported from District Kathua of J&K. In Karnataka the stamens of the species are used to treat urinary inflammation in livestock (Kumar & Nagayya 2017), while people in Andhra Pradesh were reported to use the corns of maize for treating reproductive disorders (Pragada & Rao 2012).
Conservation perspective
Of the 31 reported species, only ten species (Allium cepa, Allium sativum, Brassica rapa, Capsicum annuum, Grewia optiva, Mentha longifolia, Prunus armeniaca, Punica granatum, Vitex negundo, Zea mays) have been brought under cultivation. There is a great need for the cultivation and conservation of the remaining plant species to improve their sustainable use.
Conclusion
The current study's findings show the extent of information among the Gujjar and Bakarwal tribes living in the Poonch district of Jammu and Kashmir, India, regarding medicinal plants and their usefulness in livestock care. The congruence between the ethnoveterinary uses documented in the present study and the uses found in the literature for most plant species supplements the information on the traditional ethnoveterinary uses of the respective plants. The present study found no specific herbal remedy for cows, buffaloes, oxen, and horses; the same treatments were given to different animals. However, the dose of the preparation varied according to the animal's species and age. Proper scientific validation would be an essential step for the standardization and wider utilization of ethnoveterinary species, and the possible development of allopathic ethnoveterinary drugs from these resources. The lack of cultivation of most plant species in the study area is a concern and needs to be addressed by the relevant authorities so that proper initiatives can be developed.
Effect of Testosterone Undecanoate on Sexual Functions, Glycaemic Parameters, and Cardiovascular Risk Factors in Hypogonadal Men with Type 2 Diabetes Mellitus
Aims: To study the effect of testosterone undecanoate on sexual functions, glycaemic parameters, and cardiovascular (CV) risk factors in hypogonadal men with type 2 diabetes mellitus (T2DM). Methods: It was an open label, single-arm interventional study where testosterone undecanoate (TU) was used in 105 T2DM males aged 30–60 years with hypogonadism. The effect of TU on sexual functions was assessed using the Aging Male Symptoms (AMS) Scale and the International Index of Erectile Function-5 (IIEF-5) Questionnaire. The effect on glycaemic parameters and cardiovascular risk factors (lipids, high-sensitivity C-reactive protein [hsCRP], and carotid intima media thickness [CIMT]) was assessed over a period of 54 weeks of TU therapy. Results: The prevalence of hypogonadism in T2DM patients was 19.1%, of which 74.1% had functional hypogonadism. AMS and IIEF-5 scores showed negative and positive correlation, respectively, with baseline serum testosterone levels. The AMS score showed a significant reduction of 5.8% and the IIEF-5 score improved by 31.5% at 54 weeks of TU therapy. Glycosylated hemoglobin (HbA1c), homeostatic model assessment for insulin resistance (HOMA-IR), and lipids such as total cholesterol (TC), low-density lipoprotein (LDL), and triglycerides (TG) were significantly reduced by 0.6%, 10.9%, 6.28%, 9.04%, and 6.77%, respectively, at 54 weeks. CIMT was significantly reduced by 2.57% at 54 weeks, whereas no significant change was observed with hsCRP. Conclusions: TU is an effective treatment modality for hypogonadal men with T2DM, and it has beneficial effects on sexual functions, glycaemic parameters, and CV risk factors.
and those with 231-346 ng/dL (8-12 nmol/L) may be considered for a 3- to 6-month trial of replacement therapy. [10,11] T2DM is associated with a 2- to 3-times increased cardiovascular (CV) disease risk and mortality. Reducing CV risk factors is the standard of care in T2DM to decrease the risk of cardiovascular disease (CVD). Testosterone replacement therapy (TRT) has been found to have variable results on the reduction of CV risk factors.
The objective of the present study was to assess the association of low serum testosterone levels with signs and symptoms of hypogonadism in T2DM and to assess the effect of TU therapy on sexual functions, glycaemic parameters (glycosylated hemoglobin [HbA1c], fasting plasma glucose [FPG], homeostasis model assessment of insulin resistance [HOMA-IR]), lipid parameters (total cholesterol, triglycerides, high-density lipoprotein [HDL], and low-density lipoprotein [LDL]), high-sensitivity C-reactive protein (hsCRP), and CIMT over a period of one year. For the present study, a serum total testosterone level of 264 ng/dL was chosen as the lower limit of normal, which is the 2.5th percentile of the normal serum total testosterone range. Testosterone undecanoate (1000 mg/4 mL) was chosen in the present study because it ensures better compliance with minimal side effects. It was given intramuscularly at 0, 6, 18, 30, and 42 weeks.
Material and Methods
It was a one-year, prospective study of 120 patients of T2DM with hypogonadism. The study period was from October 2019 to February 2021. It was conducted in the Department of Endocrinology of MKCG Medical College and Hospital. After obtaining clearance from the Institutional Ethics Committee and informed consent from patients, participants were selected according to inclusion and exclusion criteria. The study was registered in the Clinical Trials Registry of India with registration number CTRI/2019/09/021175. Inclusion Criteria: Patients with T2DM aged 30-60 years, experiencing symptoms of hypogonadism (Aging Male Symptoms [AMS] score ≥27) together with any one of the following three sexual symptoms, that is, decreased sexual interest, absent or rare morning erections, or erectile dysfunction (ED), underwent biochemical evaluation. Patients who had a morning (7-9 am) serum total testosterone level of <264 ng/dL (the 2.5th percentile of the harmonized reference range for total testosterone in healthy, non-obese young men) [9] after re-confirmation at one week were included in the study.
The AMS scale is a self-administered scale of health-related quality of life (HRQoL). [12] The AMS scale was previously validated and is often used for assessment of severity in hypogonadism. It contains 17 questions with responses ranging from a score of 0 to 5, with a maximum score of 85. While a symptom score of ≤26 is non-significant, scores of 27-36, 37-49, and >50 are taken to be consistent with mild, moderate, and severe hypogonadism, respectively. [12] The International Index of Erectile Function (IIEF-5) Questionnaire is used for assessment of erectile function and is a validated tool for the evaluation of ED. [13] It contains five items that focus on erectile function and intercourse satisfaction. Each question has a response ranging from 1 to 5; thus, the total score ranges from 5 to 25. ED is classified into five severity levels based on the IIEF-5 scores: severe (5-7), moderate (8-11), mild to moderate (12-16), mild (17-21), and no ED (22-25).
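For illustration, the severity cut-offs quoted above can be encoded as two small helper functions. This is a sketch of ours: only the thresholds come from the scales described in the text, and a score of exactly 50 is not covered by the quoted AMS cut-offs, so it is grouped with "severe" here.

```python
# Sketch (ours) encoding the AMS and IIEF-5 severity cut-offs quoted above.
# Only the thresholds come from the text; an AMS score of exactly 50 is not
# covered by the quoted cut-offs and is grouped with "severe" here.

def ams_severity(score: int) -> str:
    """Classify an AMS total score (maximum 85)."""
    if score <= 26:
        return "non-significant"
    if score <= 36:
        return "mild"
    if score <= 49:
        return "moderate"
    return "severe"

def iief5_severity(score: int) -> str:
    """Classify an IIEF-5 total score (range 5-25; lower means worse ED)."""
    if score <= 7:
        return "severe ED"
    if score <= 11:
        return "moderate ED"
    if score <= 16:
        return "mild to moderate ED"
    if score <= 21:
        return "mild ED"
    return "no ED"

print(ams_severity(30), "|", iief5_severity(14))  # mild | mild to moderate ED
```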
Exclusion Criteria:
The following patients were excluded from the study: patients suffering from severe debilitating disease, history of prostate or breast cancer, elevated haematocrit (>48%), elevated prostate-specific antigen (PSA, >4.0 µg/L), PSA increase >1.4 µg/L within any 12-month period, palpable prostate nodule and induration on digital rectal examination (DRE), severe obstructive sleep apnoea (OSA), severe lower urinary tract symptoms (LUTS) (International Prostate Symptom Score [IPSS] >19), previously treated hypogonadism, acute coronary event in the last 6 months, chronic obstructive lung disease, and recent use of phosphodiesterase-5 inhibitors (within the last 6 months).
After selecting subjects as per the pre-defined inclusion and exclusion criteria, anthropometric parameters like height, weight, and waist circumference were measured. Body mass index (BMI) was calculated using the following formula: BMI = Weight (kg)/[Height (m)]². AMS and IIEF-5 scores were recorded. Fasting blood samples were taken from 7 am to 9 am to measure serum total testosterone (TT), luteinizing hormone (LH), follicle stimulating hormone (FSH), FPG, HbA1c, fasting lipids (TC, TG, HDL, LDL), hsCRP, haematocrit, serum fasting insulin (in patients not receiving insulin), and serum PSA. Serum TT was measured on an Abbott Architect analyser using a competitive chemiluminescent immunoassay (CLIA). Intra- and inter-assay coefficients of variability were 2.49% and 6.5%, respectively, with a range of 181-772 ng/dL. HOMA-IR was calculated as follows: HOMA-IR = fasting serum insulin × FPG/405 (fasting serum insulin in µIU/mL and FPG in mg/dL). Carotid intima media thickness (CIMT) of the carotid arteries was measured by a single experienced radiologist using a General Electric Logiq S7 Expert/Pro ultrasonic device with a 9L-D probe, frequency band 3.1-10 MHz, in B mode. Intima-media thickness (IMT) was measured on three sections of both carotid arteries: communis, internal, and bulbus. At each investigated section, IMT measurements were taken three times and the average value for each section was subsequently calculated. Patients with baseline serum TT less than 231 ng/dL were classified as having severe hypogonadism (sHG), and those who had ≥231 ng/dL to <264 ng/dL were classified as having mild hypogonadism (mHG).
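The BMI and HOMA-IR formulas above translate directly into code; a minimal sketch follows. The example height is an assumed value, and the weight and insulin/glucose magnitudes only echo means quoted in the text rather than individual patient data.

```python
# Sketch of the BMI and HOMA-IR formulas given above. The example height is an
# assumed value; 63.31 kg and the insulin/glucose magnitudes only echo means
# quoted in the text and are not individual patient data.

def bmi(weight_kg: float, height_m: float) -> float:
    """BMI = weight (kg) / height (m)^2."""
    return weight_kg / height_m ** 2

def homa_ir(fasting_insulin_uiu_ml: float, fpg_mg_dl: float) -> float:
    """HOMA-IR = fasting insulin (uIU/mL) x FPG (mg/dL) / 405."""
    return fasting_insulin_uiu_ml * fpg_mg_dl / 405

print(round(bmi(63.31, 1.65), 2))       # ~23.25 kg/m^2
print(round(homa_ir(13.3, 167.6), 2))   # ~5.5
```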
Selected patients were then administered an intramuscular (IM) injection of 1000 mg of testosterone undecanoate (TU) in the upper outer quadrant of the gluteal region at 0, 6, 18, 30, and 42 weeks. TT was measured at the end of each dosing interval, just prior to the next injection, to maintain levels in the mid-normal range (350-600 ng/dL). FPG and postprandial plasma glucose were measured at each monthly visit for maintenance of glycaemic control. Patients were re-evaluated at 6, 18, 30, 42, and 54 weeks. At each visit, anthropometric parameters and AMS and IIEF-5 scores were recorded. Fasting lipids were measured at 0, 18, 30, and 54 weeks. Serum insulin and hsCRP were measured at baseline and again at 54 weeks. PSA, haematocrit, and CIMT were measured at 0, 30, and 54 weeks. Response to TU therapy was defined for the present study as follows: those who had a decrease in AMS score and/or an increase in IIEF-5 score at 54 weeks were considered responders, whereas those who had an increase or no change in AMS score, or a decrease or no change in IIEF-5 score, were considered non-responders. At any point of time, the patient was to be withdrawn from the study if any of the following parameters was reached: haematocrit >54%, PSA >4 µg/L, palpable abnormality on DRE, or IPSS score >19. Each patient was followed for 54 weeks or until he was withdrawn from the study. Patients withdrawn were not included in the final analysis.
Statistical Analysis: Data were entered into a Microsoft Excel datasheet and analysed using the Statistical Package for the Social Sciences (SPSS) 24 (IBM Corp.). Descriptive statistical methods such as mean and standard deviation were applied to summarize continuous variables. Categorical data were summarized as percentages or proportions. Paired t-tests were used for within-group comparisons, and unpaired t-tests were used for between-group comparisons. A P value of less than 0.05 was considered significant. Graphs and charts were generated using SPSS 24 and Microsoft Excel for Windows.
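As an illustration of the within-group and between-group comparisons described above, the following sketch uses SciPy's paired and unpaired t-tests on synthetic stand-in data; the numbers are not the study data, and only the rough means echo values reported in the results.

```python
# Illustration of the comparisons described above using SciPy: a paired t-test
# for within-group change over time and an unpaired t-test between groups.
# The arrays are synthetic stand-ins, not the study data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
hba1c_baseline = rng.normal(7.9, 1.0, size=105)
hba1c_54wk = hba1c_baseline - rng.normal(0.6, 0.3, size=105)
t_paired, p_paired = stats.ttest_rel(hba1c_baseline, hba1c_54wk)

tt_responders = rng.normal(234, 20, size=96)
tt_non_responders = rng.normal(245, 20, size=9)
t_unpaired, p_unpaired = stats.ttest_ind(tt_responders, tt_non_responders)

print(f"within-group P = {p_paired:.3g}, between-group P = {p_unpaired:.3g}")
```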
Results
A total of 850 consecutive male patients with T2DM were screened for hypogonadism. One hundred sixty-two patients (19.1%) were found to have hypogonadism, of which 42 were excluded as 29 had primary hypogonadism and 13 had secondary hypogonadism due to other causes. Fifteen out of 120 patients were not included in the final analysis, as nine were lost to follow-up and six were withdrawn from the study (four had increased haematocrit levels >54% during the study, and two had an IPSS of >19). Thus, the data obtained from 105 patients with T2DM and hypogonadism formed the basis of our study [Figure 1].
Effect of TRT on anthropometric parameters:
The mean body weight reduced non-significantly by 0.09 kg at 30 weeks and significantly by 0.30 kg at 54 weeks from the baseline mean body weight of 63.31 ± 7.66 kg. Similarly, the BMI also decreased significantly, by 0.11 kg/m² at 54 weeks, from 23.31 ± 2.48 kg/m² to 23.20 ± 2.45 kg/m². However, the decrease in waist circumference was not significant [Table 2].
Effect of TRT on glycaemic parameters: Mean HbA1c at baseline was 7.9 ± 1.0% and reduced significantly to 7.7 ± 0.7% and to 7.3 ± 0.4%, with absolute decreases of 0.2% and 0.6% at 30 weeks and 54 weeks, respectively. FPG at baseline was 167.6 ± 33 mg/dL and reduced significantly to 136.0 ± 14.3 mg/dL at 54 weeks, with an absolute decrease of 31.6 mg/dL. HOMA-IR was assessed at baseline and at 54 weeks of TRT in 71 patients who were not on insulin treatment. At baseline, mean HOMA-IR was 5.5 ± 1.0 and it decreased to 4.9 ± 0.9 at 54 weeks, a decrease of 0.6 (10.9%) which was statistically significant [Table 2].
Effect of TRT on cardiovascular risk factors: Mean serum TC, LDL, and TG showed significant reductions of 6.28%, 9.04%, and 6.77%, respectively, at 54 weeks. Mean HDL level showed no significant change from baseline at either 30 weeks or 54 weeks of TU therapy. The TG levels showed a significant reduction from 30 weeks of TU therapy. Serum hsCRP was assessed at baseline and at 54 weeks of TRT. It was 2.21 ± 0.67 mg/L at baseline and was found to have a reduction of 0.13 mg/L (5.98%) at 54 weeks, but this was not statistically significant. CIMT was assessed at baseline, 30 weeks, and 54 weeks of TRT. It was 0.700 ± 0.057 mm at baseline and decreased non-significantly to 0.695 ± 0.046 mm at 30 weeks and significantly to 0.682 ± 0.042 mm at 54 weeks (2.57%) [Table 3].
Response to Testosterone Undecanoate Therapy:
In the present study, 91.4% of patients (n = 96/105) responded to TU therapy and 8.5% (n = 9/105) were non-responders. Various baseline parameters in responders and non-responders were compared and were not significantly different, except for baseline serum testosterone. Responders had higher AMS scores and lower IIEF-5 scores at baseline compared to non-responders. The mean baseline testosterone level in the responder group was significantly lower compared to non-responders (234 vs 245 ng/dL, P = 0.05). TU therapy decreased AMS and/or increased IIEF-5 in both the severe hypogonadism (sHG) and mild hypogonadism (mHG) groups. There were 97.4% responders (n = 38/39) in the sHG group and 87.8% responders (n = 58/66) in the mHG group [Table 4].
Sexual functions improved significantly in both the mHG and sHG groups. There was a differential effect of TU therapy on AMS and IIEF-5 scores.

Discussion

There is a frequent association of hypogonadism in patients with T2DM; this was confirmed in the present study. In the present study, the prevalence of hypogonadism among the 850 T2DM patients who were screened was 19.1%. Functional hypogonadism was the most common form of hypogonadism, observed in 74.1% of patients. [17] However, Dhindsa et al. [18] found a prevalence of 33% in the age group of 28-80 years in T2DM patients, which could be due to an older population, higher BMI, and the criteria of hypogonadism used. TU therapy in patients with hypogonadism and T2DM improved sexual functions significantly starting from 18 weeks. There was also a reduction in weight and improvement in glycaemic parameters and CV risk factors at 54 weeks of TU therapy.

Anthropometric parameters showed significant improvements with TU therapy, with a mean decrease in body weight and BMI of 0.30 kg and 0.11 kg/m², respectively, at 54 weeks. However, waist circumference showed no significant change with TU therapy. Body weight and BMI showed a decreasing trend from 18 weeks onward and reached significance at 54 weeks. The decrease in BMI in the present study was lower compared to the studies by Antonič et al. [19] and Khripun et al., [20] where decreases of 0.7 kg/m² and 2 kg/m², respectively, were observed. However, not all studies are concordant in the reduction of BMI; some studies showed no significant decrease in BMI on TU therapy. [21,22] Waist circumference in the present study showed no significant decrease at 54 weeks. Studies on waist circumference have found variable results, with some showing a decrease, [19,20] some no change, [23] and some even an increase in waist circumference (WC) after TRT. [24] The decrease in body weight and BMI could be attributed to a decrease in visceral adipose tissue, induction of hormone-sensitive lipase, decreased uptake of triglycerides by adipocytes, [25,26] and differentiation of pluripotent mesenchymal stem cells to the myogenic lineage rather than to the adipogenic lineage. [27]

In our study, we clinically evaluated sexual functions in patients with T2DM and hypogonadism by using two psychometric scales. AMS and IIEF-5 scores showed significant negative and positive correlations, respectively, with baseline serum testosterone levels. A similar negative correlation of AMS score with serum testosterone was seen in a study by Kang et al.; [28] however, it was not statistically significant. Significant improvement in AMS score started at 30 weeks and persisted till 54 weeks. Almehmadi et al. [29] and Hackett et al. [30] showed a decrease in AMS using TU at 12 weeks and 30 weeks, respectively. However, a few studies have shown no significant reduction in AMS score. [19,22,23] ED is a major concern in patients with T2DM and hypogonadism. In the present study, ED, as assessed by the IIEF-5 score, also showed improvement with TRT from 18 weeks onwards. Similar improvement in IIEF-5 score had been observed in studies by Almehmadi et al. [29] (using TU) and Shigehara et al. [31] (using testosterone enanthate 250 mg monthly) at 3 months and 6 months of TRT, respectively.
There was significant improvement in AMS and IIEF-5 scores after TU therapy. As the IIEF-5 is more specific in nature, it showed more improvement following TU therapy than the non-specific AMS. The response to TU therapy was observed in both severe and mild hypogonadism (97% vs 87%, respectively); although the response was greater in severe hypogonadism than in mild hypogonadism, the difference did not reach statistical significance. The present study showed that men with type 2 diabetes who had lower baseline serum testosterone levels may show a better response than others.

There was a significant reduction in HbA1c from baseline to 54 weeks (0.6%). FPG also showed a significant reduction of 31.6 mg/dL over the period of 54 weeks. The decrease in HbA1c in the present study may be attributed to multiple factors like the effect of treatment with drugs, TRT, and lifestyle measures. Since there is no control group, it is difficult to attribute the reduction in HbA1c to TRT. The significant improvement in HbA1c and FPG in the present study was concordant with the study by Hackett et al., [30] showing reductions of 0.24% and 18 mg/dL using TU in severely hypogonadal men with T2DM. [21] However, studies were not concordant with regard to improvement in FPG and HbA1c on TU therapy. [23,31,35,36] An improved glycaemic profile in patients receiving TRT can be due to decreased weight and BMI, improved HOMA-IR, and an increase in muscle mass. HOMA-IR was assessed at baseline and at 54 weeks in 71 patients who were not on insulin therapy, and it showed a significant reduction of 0.6. Significant decreases in HOMA-IR ranging from 1.7 to 4.6 were also noted in various studies. [20,21,37]

There was improvement in TC, TG, and LDL levels at 54 weeks, and a similar observation was found in the TT trials [37] and by Antonič et al. [19] at one year of TU therapy. However, a few studies have found no significant change in lipid levels after TRT, [31,36] probably because most of the patients were already on long-term statin therapy. hsCRP serves as an alternative predictor of CVD and endothelial dysfunction, and as a biomarker of systemic low-grade inflammation. [42] It was reduced by 0.13 mg/L (5.98%) at 54 weeks of TU therapy but did not reach statistical significance (P = 0.072). Similar observations were made by Gianatti et al. [43] and Kapoor et al. [21] in their studies. Beneficial effects of decreasing CRP may be related to observations that CRP increases apoptosis of endothelial cells and blocks differentiation of endothelial progenitor cells, eventually blocking angiogenesis. [44] CIMT was evaluated at baseline, at 30 weeks, and at 54 weeks. It showed a significant decrease of 0.018 mm (2.57%) at 54 weeks. Similar to the present study, other studies have found a reduction in CIMT at 54 weeks. [45] Improvement in hsCRP and CIMT with TRT can be due to multiple mechanisms. Testosterone has a direct effect on androgen receptors present on the endothelium and vascular smooth muscle cells. [46] It stimulates the activity of endothelial progenitor cells and modulates vascular tone by regulating vasodilation. [47,48] It reduces ET-1 and resistin, which play a role in activation of the endothelium and in triggering the proliferation of vascular smooth muscle, leading to the progression of atherosclerosis. [20,49,50]
Mild adverse events of injection site pain occurred in 90% of patients, but these resolved within 1-2 days. Other mild adverse effects like nausea and vomiting were not reported in the study. Haematocrit and PSA were monitored during TU therapy, and a significant increase was seen in both. An increase in haematocrit (>54%) leading to discontinuation was seen in four patients, and they were excluded from the final analysis. None of the patients showed an increase of PSA to >4 ng/mL, which would have led to discontinuation of therapy. However, two patients had an IPSS score of >19, leading to discontinuation of treatment and exclusion from the analysis. Severe adverse events like fat embolism or OSA were not reported in the present study. Only five patients missed a single dose and three patients missed two doses of TU.
The main limitation of our study was that it was an open-label, single-arm intervention study; inclusion of a placebo arm could have further strengthened the results. Insulin resistance was assessed by HOMA-IR rather than by the gold-standard hyperinsulinemic-euglycemic clamp. In the present study, serum TT was used rather than free testosterone because of the unavailability of an accurate assay for the latter (equilibrium dialysis). Serum testosterone was measured using CLIA; to reduce variability and confirm low levels of serum testosterone, repeat measurements were performed one week apart. The main strength of the study was that only T2DM patients meeting clinical and biochemical criteria for hypogonadism were included. TU was used in the present study, unlike previous trials where testosterone enanthate and other esters were used. Thus, institution of TRT in the form of TU in hypogonadal men with T2DM having serum TT <264 ng/dL produces beneficial effects like reductions in weight, BMI, HOMA-IR, FPG, HbA1c, and CV risk factors. These effects are over and above the improvements in sexual functions brought by TRT.
Declaration of patient consent
The authors certify that they have obtained all appropriate patient consent forms. In the form, the patient(s) has/have given his/her/their consent for his/her/their images and other clinical information to be reported in the journal. The patients understand that their names and initials will not be published and due efforts will be made to conceal their identity, but anonymity cannot be guaranteed.
Financial support and sponsorship
Endocrine Society of India (ESI), Research Grant.
Figure 2: Scatter plot showing correlation of baseline serum TT with baseline AMS score and baseline IIEF-5 scores.

Table 2 reports, for each parameter, the mean ± SD at each week of follow-up, the difference from baseline, the percentage change from baseline (%), and P compared with baseline. BMI, Body mass index; FPG, Fasting plasma glucose; HOMA-IR, Homeostatic Model Assessment of Insulin Resistance.

vs 6.3% in severe and mild hypogonadism, respectively. This could be due to the non-specific nature of AMS scores [Table 5].

Table 5: Effect of testosterone undecanoate therapy on sexual functions among patients with mild hypogonadism and severe hypogonadism. # Percentage change from baseline within the group. * P for within-group comparison from baseline. AMS, Aging Male Symptoms; IIEF-5, International Index of Erectile Function-5.
Optimal Observation-Intervention Trade-Off in Optimisation Problems with Causal Structure
We consider the problem of optimising an expensive-to-evaluate grey-box objective function, within a finite budget, where known side-information exists in the form of the causal structure between the design variables. Standard black-box optimisation ignores the causal structure, often making it inefficient and expensive. The few existing methods that consider the causal structure are myopic and do not fully accommodate the observation-intervention trade-off that emerges when estimating causal effects. In this paper, we show that the observation-intervention trade-off can be formulated as a non-myopic optimal stopping problem which permits an efficient solution. We give theoretical results detailing the structure of the optimal stopping times and demonstrate the generality of our approach by showing that it can be integrated with existing causal Bayesian optimisation algorithms. Experimental results show that our formulation can enhance existing algorithms on real and synthetic benchmarks.
Introduction
This paper studies global optimisation of an expensive-to-evaluate grey-box [5] objective function with known causal structure in the form of a causal diagram (making it grey-box rather than 'black-box'). In this setting, inputs to the objective function correspond to interventions and outputs correspond to causal effects. We assume that the objective function can be evaluated (possibly with noise) at a finite number of inputs, either by measurement or some estimation procedure. Each evaluation is associated with a cost and a finite budget of total evaluations is prescribed. Since no known functional form of the objective function is available, our goal is to find an input that optimises the objective function by estimating the causal effects of a sequence of interventions. This estimation can be done in two ways: i) by intervening and conducting controlled experiments; and ii) by passively observing and using the causal graph to estimate the causal effects (vis-à-vis the do-calculus [45]). In choosing between these two options, an observation-intervention trade-off emerges. On the one hand, interventions are costly but allow us to reliably estimate causal effects. On the other hand, observations are (usually) cheap to collect but may not always be sufficient to identify causal effects. We show that this trade-off can be formulated as an optimal stopping problem that permits an efficient solution [66,54,16].
Two principal algorithmic frameworks have been developed to solve optimisation problems of the type described above: i) causal Bayesian optimisation (CBO) algorithms [2,3,60,15], which assume that the objective function is defined over a continuous domain; and ii) causal multi-armed bandit (MAB) algorithms.

We denote by G_X the mutilated graph obtained by deleting from G all arcs pointing to nodes in X. Examples of mutilated graphs are shown in Fig. 1.

Estimating causal effects. The goal of causal inference is to generate probabilistic formulas for the effects of interventions in terms of observation probabilities. In this work we accomplish this by employing Pearl [45]'s do-calculus, which is an axiomatic system for replacing probability formulas containing the do-operator with ordinary conditional probabilities. Application of the do-calculus requires the interventions to be uniquely determined from P(V) and G. Determining whether this is the case is known as the problem of identification and has received considerable attention in the causal inference literature [46,45,56,57,63,62]. Formally: Definition 1. Causal effect identifiability [6, Def. 1]. Let X, Y be two sets of disjoint variables and let G be the causal diagram. The causal effect of an intervention do(X = x) on a set of variables Y is said to be identifiable from P in G if P(Y | do(X = x)) is uniquely computable from P(V) in any causal model that induces G.
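As a concrete illustration of the mutilated graph G_X introduced above, the following sketch builds a small hypothetical DAG with networkx and deletes all arcs pointing into the intervened nodes; the example graph and the helper name are ours, not taken from the paper.

```python
# Minimal sketch (ours) of the mutilated graph G_X described above: delete every
# arc pointing into an intervened node. The example DAG is hypothetical.
import networkx as nx

G = nx.DiGraph([("X", "Z"), ("Z", "Y"), ("U", "X"), ("U", "Y")])

def mutilate(graph: nx.DiGraph, intervened: set) -> nx.DiGraph:
    """Return a copy of `graph` with all edges into `intervened` removed."""
    g = graph.copy()
    g.remove_edges_from([(u, v) for u, v in graph.edges if v in intervened])
    return g

G_after_do_X = mutilate(G, {"X"})
print(sorted(G_after_do_X.edges()))  # [('U', 'Y'), ('X', 'Z'), ('Z', 'Y')]
```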
Intervention sets. Given a causal graph G of an SCM M ≜ ⟨U, V, F, P(U)⟩ with a set of manipulative variables X and a target variable Y, we can define minimal intervention sets, which represent non-redundant intervention sets for achieving an effect on Y.

Observation sets. The solution to the identification problem [57] tells us under what conditions the effect of a given intervention can be computed from P(V) and the causal diagram G. A number of sound and complete algorithms exist which solve this problem [55,57,61,25]. The solution, if it exists as per Def. 1, returns an expression Q^X_Y which only contains observational measures. The set of variables Z ⊆ V occurring in Q^X_Y is the minimal observation set, which we introduce and formally define as Definition 4. Minimal observation set (MOS). If P(Y | do(X = x)) is identifiable as per Def. 1, then ∃ Q^X_Y. If a) Q^X_Y can be estimated by observing Z ⊆ V and b) ∄ Z′ ⊂ Z that allows us to estimate Q^X_Y, then Z is a MOS. The MOS which follows from Q^X_Y is denoted by O^X_{G,Y}.
We demonstrate Def. 4 by considering the causal diagrams in Fig. 2. Applying the rules of do-calculus we can express the interventional distributions in terms of observational mass functions:
Problem statement
Consider a causal graph G that encodes the causal relationships among a finite set of variables V in a stationary SCM M = ⟨U, V, F, P(U)⟩. We are interested in manipulating X ⊆ V to minimise a target variable Y ∈ V \ X, which we assume is bounded, i.e. |y| ≤ M < ∞ for some M ∈ R and all y ∈ dom(Y). This objective is formally expressed as

X*, x* ∈ arg min_{X′ ∈ P(X), x′ ∈ dom(X′)} E[Y | do(X′ = x′)].    (3)

We assume that interventions are atomic (also known as 'hard' [60] or 'perfect'), as modelled by the do-operator [45]. 'Soft' or 'stochastic' [17] intervention settings are left for future work. We further assume that the functional relationships in M (i.e. F) are unknown (but G is assumed known), which means that minimising (3) requires estimating E[Y | do(X′ = x′)] from data. This estimation can be done in two ways: i) by intervening and conducting a controlled experiment, which yields samples from the interventional distribution P(Y | do(X′ = x′)); and ii) by passively observing O^{X′}_{G,Y} (see Def. 4) and using G to estimate the causal effect through the do-calculus [44] (given that the causal effect is identifiable; otherwise the causal effect has to be estimated by intervening). Both estimation procedures are perturbed by additive Gaussian noise ε_t ∼ N(0, σ²) and involve costs. Denote by c(X′, e_t = I) the cost of estimating E[Y | do(X′ = x′)] by intervening and by c(X′, e_t = O) the cost of estimating the same expression by observing. The problem, then, is to design a sequence of interventions (do(X′_t = x′_t))_{t∈{1,...,T}} and a sequence of estimation procedures (e_t)_{t∈{1,...,T}} to find an intervention that minimises (3) while keeping the cumulative cost below a maximum cost K. This problem is formally stated in (4a)-(4c), where (X*, x*) denotes the minimiser of (3) and the expression inside the brackets of (4a) is the simple regret metric [22]. Further, (4b)-(4c) define the cost and domain constraints. The time horizon T is defined as the largest t ∈ N for which (4b) is satisfied.
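A schematic of the evaluation loop implied by the problem statement and the budget constraint might look as follows; all helper callables (the optimisation policy, the procedure choice, the estimators) and the cost bookkeeping are placeholders of ours, not the paper's algorithm.

```python
# Schematic (ours) of the budget-constrained loop implied by the problem
# statement: at each stage pick an intervention and an estimation procedure,
# and stop once the cumulative cost would exceed the budget K. All callables
# and the cost bookkeeping are placeholders, not the paper's algorithm.

def optimise(select_intervention, choose_procedure, estimate, costs, K):
    data, spent, best = [], 0.0, (None, float("inf"))
    while True:
        x = select_intervention(data)      # optimisation policy pi_O
        e = choose_procedure(data, x)      # 'I' (intervene) or 'O' (observe)
        if spent + costs[e](x) > K:        # cumulative cost constraint
            break
        spent += costs[e](x)
        y = estimate(x, e)                 # noisy estimate of E[Y | do(x)]
        data.append((x, e, y))
        if e == "I" and y < best[1]:       # track the best evaluated intervention
            best = (x, y)
    return best, data
```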
The above problem is challenging for two reasons. First, to select the optimal intervention do(X ′ t = x ′ t ) to evaluate at each stage t ∈ {1, . . . , T }, it is necessary to take into account both exploration (evaluating the causal effects in regions of high uncertainty) and exploitation (evaluating the causal effects in regions deemed promising based on previous evaluations). Second, in selecting the evaluation procedures (e t ) t∈{1,...,T } , it is necessary to balance the trade-off between intervening (estimating causal effects through controlled experimental evaluations) and observing (estimating causal effects through the do-calculus).
The exploration-exploitation trade-off is well-studied in the statistical learning literature (see textbooks [22,33]) and numerous acquisition functions that balance this trade-off have been proposed [59,22,33]. In contrast, the observation-intervention trade-off, which is the focus of this paper, is still relatively unexplored. In the following sections, we formulate this trade-off as an optimal stopping problem and present our main solution approach -Optimal Stopping for Causal Optimisation.
4 Optimal stopping formulation of the observation-intervention trade-off

Figure 3: Optimal Stopping for Causal Optimisation (OSCO) to balance the intervention-observation trade-off; an optimisation policy π_O decides on a sequence of interventions (do(X_t = x_t))_{t∈{1,...,T}} to evaluate in an SCM M, and the procedures to evaluate these interventions are decided by solving optimal stopping problems (M_t)_{t∈{1,...,T}}.
We formulate the problem of designing the sequence of estimation procedures (e t ) t∈{1,...,T } as a series of optimal stopping problems [66,54,47,16]. In this formulation, we assume that an optimisation policy π O that inspects the available data and selects the intervention do(X ′ t = x ′ t ) to evaluate at each stage t, is given. We place no restrictions on how this policy is obtained or implemented. It may, for example, be derived from an acquisition function that balances the exploration-exploitation trade-off, as is done in e.g. CBO [2]. We further assume that the objective function µ (3) and the functions F of the underlying SCM are estimated by the probabilistic models µ Dt and F Dt , respectively. Here D t represents the available data at stage t of the optimisation and as |D t | → ∞, µ Dt → µ and F Dt → F.
The models µ and F allow us to guide the optimisation process and quantify the expected value and uncertainty in different regions of the interventional space (3). Specifically, F Dt allows us to estimate causal effects through the do-calculus and µ Dt represents the current knowledge of the causal effects, allowing the optimisation policy π O to make informed decisions about which intervention to evaluate at each stage.
Given the optimisation policy and the probabilistic models defined above, we seek to design the sequence (e_t)_{t∈{1,...,T}} to optimally allocate the available evaluation budget between intervening and observing so as to minimise (4a). This task can be formally expressed as a series of Markovian and stationary optimal stopping problems M_1, ..., M_T (see Fig. 3). To see this, note that, at any stage t of the optimisation, the models µ_{D_t} and F_{D_t} allow us to simulate the growth of the dataset D_t and plan ahead when deciding between intervening and observing. This look-ahead planning involves two well-known challenges: i) the possibly mis-specified models µ_{D_t} and F_{D_t} may lead to error propagation when simulating many steps into the future [68]; and ii) the number of possible simulation trajectories of D_t is infinite, which means that the planning problem corresponds to solving an intractable Markov decision process (MDP) [49,67]. Most existing algorithms deal with these problems by truncating the planning horizon to one step [22,2,3,60]. We propose to instead truncate the planning horizon to the next intervention, which may involve simulating many observation steps. This means that the growth of the dataset D_t follows a stationary Markov process governed by the probability law in (5), where S_k ∈ S denotes the state of the process at time-step k and ⊥ is an absorbing terminal state. At each time-step k > 1 of this process, a new observation o_k is sampled from F_{D_t} and added to the dataset D_t ∪ {o_2, ..., o_{k−1}}, which results in a new state S_{k+1}. The process is stopped whenever an intervention (e_t = I) is carried out. Thus the problem of deciding between intervening and observing becomes one of optimal stopping, where the goal is to find an optimal stopping time T* as defined in (6), where T = inf{t : t ≥ 1, e_t = I} and r(S_T) denotes the reward of intervening (stopping) at time T. Note that if the observation process has not been stopped at time k = T − 1, the cost constraint in (4b) forces it to stop at time T, even if no intervention is carried out. We refer the reader to Appendix I for background on optimal stopping theory.
Due to the Markov property, the stopping problem can equivalently be formulated as an MDP, and the optimal stopping time is any stopping time that satisfies (6) subject to (4b)-(4c) and (5).
By solving (6), we obtain the optimal stopping time T*, which decides the next evaluation procedure e_t. In particular, if T* = 1, the causal effect is estimated by intervening (e_t = I); otherwise the causal effect is estimated by observing (e_t = O). In either case, the resulting samples are used to update the probabilistic models µ_{D_t} and F_{D_t}, and we proceed to the next stage of the optimisation, wherein the next stopping problem M_{t+1} is defined.

The stopping reward. A key issue in the design of the above stopping problem is the stopping reward r(S_T), which models how beneficial it is to intervene given the state S_T. An intervention can be beneficial to the optimisation in two ways. First, it can improve the current estimate of the optimum. Second, it can reduce the uncertainty in the objective function µ. We model these two benefits with µ_{S_k} and the information gain measure I [18], respectively (see (7)). Here r(⊥) ≜ 0 and η, τ and κ are scalar constants. The information gain I(S_k; µ) = H(S_k) − H(S_k | µ) quantifies the reduction in uncertainty about µ from revealing the dataset S_k, where H is the differential entropy function [18,59]. The terms µ_{S_k}(X′_k, x′_k) and c(X′_k, I) quantify the expected value and the cost of the intervention, respectively. Finally, Vol(V)/Vol(S_k) denotes the convex hull of the interventional domain of V divided by the convex hull of the observations in S_k. The purpose of this term is to incentivise the collection of observations at the beginning of the optimisation, when |D_t| is small and it is not possible to plan ahead using the models µ and F (a similar term is used in [2]).
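The ingredients of the stopping reward described above can be sketched as follows. Since the exact form of (7) is not reproduced here, the linear combination below and the helper signatures are our own illustrative assumptions; only the individual terms (expected value, information gain, intervention cost, and the volume ratio Vol(V)/Vol(S_k)) come from the text.

```python
# Sketch (ours) of the ingredients of the stopping reward discussed above. The
# exact form of (7) is not reproduced here; the linear combination and helper
# signatures below are illustrative assumptions only.
import numpy as np
from scipy.spatial import ConvexHull

def information_gain(prior_std, posterior_std):
    """I(S; mu) = H(S) - H(S|mu) for independent Gaussian marginals (in nats)."""
    return float(np.sum(np.log(prior_std) - np.log(posterior_std)))

def volume_ratio(domain_points, observed_points):
    """Vol(V) / Vol(S_k): convex-hull volume of the domain over that of S_k."""
    return ConvexHull(domain_points).volume / ConvexHull(observed_points).volume

def stopping_reward(mu_value, gain, intervention_cost, vol_ratio,
                    eta=1.0, tau=1.0, kappa=1.0):
    # Lower predicted target values are better (Y is minimised), hence -mu_value;
    # a large Vol(V)/Vol(S_k) penalises stopping early, encouraging observations.
    return eta * (-mu_value) + tau * gain - intervention_cost - kappa * vol_ratio
```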
Efficient computation of the optimal stopping time
Equation (7) implies that it is optimal to intervene (stop) whenever r(S_k) ≥ α_{S_k}, where α_{S_k} denotes the second expression inside the maximisation in (7). This means that we can divide the state space into the two subsets defined in (8), where S_l and C_l are the stopping and continuation sets with l time-steps remaining, respectively. These sets cannot overlap and their union S_l ∪ C_l covers the state space S. Since the set of admissible stopping times in (6) decreases as l → 1, the stopping sets form an increasing sequence S_l ⊆ S_{l−1} ⊆ ... ⊆ S_1 and, similarly, the continuation sets form a decreasing sequence C_l ⊇ C_{l−1} ⊇ ... ⊇ C_1. Using these sets, the optimal stopping time can be expressed as in (9)-(10). Based on (9)-(10), we state the following structural result regarding the optimal stopping times for the stopping problem defined in (6).
Theorem 1. Given the stopping problem in (6), if a) the optimisation policy π_O is such that µ_S(π_O(S)) is supermodular and c(π_O(S), I) is non-increasing in |S|; and b) I(S_k; µ) is submodular, then S_1 is closed. That is, P(S_{k+1} = s_{k+1} | S_k = s_k) = 0 if s_k ∈ S_1 and s_{k+1} ∉ S_1.
Proof. See Appendix D.
Informally, Theorem 1 states that if a state s k is encountered for which it is better to intervene than to collect one more observation and then intervene, then no matter the next observation, the next state will always satisfy the same property. This result hinges on two assumptions. Assumption a) states, informally, that as the uncertainty about µ is reduced, the optimisation policy π O explores less and instead prefers exploiting regions of the interventional space that are deemed promising based on µ S . This assumption is for example satisfied by an ϵ-greedy optimisation policy with decaying ϵ. Similarly, the informal interpretation of assumption b) is that the gain of collecting observations reduces with the number of observations. The conditions for b) to hold are given in [28, Prop. 2] and are true in general. They hold for example if µ S is a Gaussian process (GP) [59].
A direct consequence of Theorem 1 is that the optimal stopping time can be obtained from a simple rule that is efficient to implement in practice, as stated in the following corollary.

Corollary 1. If assumptions a) and b) in Theorem 1 hold, then the optimal stopping time is given by

T* = inf{k : k ≥ 1, r(S_k) ≥ α_{S_k}},    (11)

and the stopping sets are all equal, i.e. S_1 = S_2 = ... = S_{T−1}.
Proof. See Appendix E.
Corollary 1 states that the stopping problem in (6), which characterises the observation-intervention trade-off for the optimisation problem in (4), permits an optimal solution that is efficient to implement in practical algorithms. In the following sections, we compare this solution to existing approaches and explain how it can be integrated with existing algorithms for optimisation problems with causal structure (e.g. CBO and causal MAB algorithms). The pseudo-code for integrating the optimal stopping problem with the existing algorithms is listed in Algorithm 1 in Appendix H.
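One way to read Corollary 1 is as a one-step-lookahead rule: intervene as soon as stopping now is at least as good as collecting one more observation and then stopping. The sketch below is our own schematic of such a rule; the reward and observation-simulation callables, the observation cost, and the Monte-Carlo estimate of the continuation value are placeholders rather than the paper's implementation.

```python
# Schematic (ours) of a one-step-lookahead reading of Corollary 1: intervene as
# soon as stopping now is at least as good as one more observation followed by
# stopping. The reward/simulation callables and costs are placeholders.
import numpy as np

def should_intervene(state, reward, simulate_observation, obs_cost,
                     gamma=1.0, n_samples=64):
    r_now = reward(state)
    # Monte-Carlo estimate of the value of observing once more, then stopping.
    r_next = np.mean([reward(simulate_observation(state)) for _ in range(n_samples)])
    alpha = gamma * r_next - obs_cost
    return r_now >= alpha  # stop (intervene) iff r(S_k) >= alpha_{S_k}
```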
Related work
Problems of optimising decision variables arise in many settings, ranging from the control of physical and computer systems to managing entire economies [49,48]. Depending on the characteristics of the optimisation problem, different solution methods are appropriate (e.g. convex optimisation [14], dynamic programming [13] and black-box optimisation [33,22]). We limit the following discussion to related work that studies grey-box optimisation problems with known causal structure. This line of research can be divided into two main approaches: causal BO and causal MABs (see e.g. [53]). Most of the prior work on BO is focused on the black-box setting and ignores prior knowledge about the objective function (see the recent tutorial paper by Astudillo & Frazier [5] for examples of prior knowledge). BO problems with known causal structure are usually studied under the aegis of the SCM framework, as is the case in this paper and in [2,3,60,15,4]. Astudillo & Frazier [5] depart from this idea by instead leveraging function networks in place of SCMs. Another design choice which differs among existing works is the intervention model. This paper studies the hard intervention model, which is consistent with [2,3,15], but differs from [5,60], which study the soft intervention model. Further, all of the existing works (including this paper) except [15,4] have in common that they assume the causal structure to be known. Branchini et al. [15] and Alabed & Yoneki [4] do not make this assumption and instead explore techniques that combine causal discovery with causal BO.
This work differs from the previous research on causal BO in two main ways. First, we propose a solution to balance the observation-intervention trade-off that emerges in grey-box optimisation problems with causal structure. Existing works either ignore this trade-off or rely on heuristics to balance it. Second, we quantify the costs associated with collecting observations and estimating causal effects via the do-calculus, which previous works do not (they generally assume that observations can be collected without cost).
Causal multi-armed bandits. Non-trivial dependencies amongst bandit arms are typically studied under structured bandits; when that structure is explicitly causal, the name causal bandits follows [34,42,38]. The literature on causal bandits is richer than that of causal BO. Bareinboim et al. [8] were the first to explore the connection between causal reasoning and MAB algorithms. Lattimore et al.
[32] and Sen et al. [52] introduced methods for best-arm identification. Non-trivial challenges arise when unobserved confounders (UCs) are present in the SCM; this is explored in [34,36], where the authors introduce the notion of POMISs for graphs with and without non-manipulative variables, respectively. More recently, in [64] the authors prove regret bounds for causal MABs with linear SCMs, binary intervention domains, and soft interventions. In the works listed thus far, the graph is assumed known. This assumption is relaxed in [39]. Another direction is budgeted MABs [40], where pulling an arm comes at a fixed cost and the agent has a finite budget which she has to spend judiciously to find the best arm subject to that limitation.
Similar to the existing work in causal BO, the main differences between this paper and the previous work on causal MAB are a) that we propose a solution to the observation-intervention trade-off; and b) we quantify the costs associated with collecting observations. Further, to our knowledge, ours is the first study that combines the structured and the budgeted MAB approaches.
Experimental evaluation
We integrate OSCO with state-of-the-art algorithms for optimisation problems with causal structure and evaluate these on a variety of synthetic and real-world SCMs with DAGs given in Fig. 4.
Baselines. Several algorithms for solving optimisation problems of the type defined in (3) (i.e. optimisation problems with causal structure) have emerged in recent years. These algorithms include CBO, MCBO, and C-UCB; we integrate OSCO with those algorithms as they are most consistent with our problem setting and assumptions. We leave the integration of OSCO with other algorithms to future work. We also compare OSCO against three heuristic baselines: a) INTERVENE, b) OBSERVE, and c) RANDOM, which correspond to policies that a) always intervene, b) always observe, and c) select between intervening and observing randomly. These latter results can be found in the appendices.
Experiment setup. We run all experiments with three different random seeds and show convergence curves of the simple regret metric min_{k∈{1,...,t}} µ(X_k, x_k) − µ(X*, x*) (this is consistent with the metric reported in [2] but differs from [60,37], which report the cumulative regret metric [33]). Hyperparameters are listed in Appendix G and were chosen based on cross-validation. Throughout, we assume that the dataset at the start of the optimisation is empty (D_1 = ∅). This contrasts with [2] and [60], which assume |D_1| > 0 in all experiments. When we compare against CBO we utilise the MOS (Def. 4) to reduce the observation costs. We do not utilise the MOS when we compare against MCBO, as MCBO relies on complete observations. Further, we only evaluate MCBO on the SCMs available in the official implementation [60], namely the chain SCM and the PSA SCM. In all implementations of OSCO, we implement the stopping rule implied by Corollary 1 (even in cases where the assumptions of the corollary do not hold, in which case it provides an approximation of the optimal stopping time). Complete experimental details can be found in the appendix.

Figure 5: Functions, models, observations, and interventions for (a) CBO with ϵ-greedy (as used in [2]) and (b) CBO with OSCO.
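For reference, the best-so-far simple-regret curve reported in the convergence plots can be computed as in the following sketch; the function name is ours, and the example optimum of −2.17 simply echoes the target value quoted in the results discussion.

```python
# Sketch of the best-so-far simple-regret curve reported in the convergence
# plots; the function name is ours and -2.17 echoes the optimum quoted in the
# results discussion.
import numpy as np

def simple_regret_curve(evaluated_values, optimum):
    """evaluated_values[t] = mu(X_t, x_t); regret of the best value so far."""
    best_so_far = np.minimum.accumulate(np.asarray(evaluated_values, dtype=float))
    return best_so_far - optimum

print(simple_regret_curve([0.5, -1.0, -0.4, -2.0], optimum=-2.17))
# -> [2.67 1.17 1.17 0.17]
```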
Results discussion. Figure 5 shows the estimated probabilistic models µ and F when running CBO with two different policies for balancing the intervention-observation trade-off: i) the ϵ-greedy policy used in [2]; and ii) the OSCO approach described in §4. We note in the lowest plots that the optimal intervention is do(Z = −3.20) (with target value Y = −2.17) and that this intervention is found in both cases. We further note that CBO with OSCO is able to accurately estimate the interventional distributions through the do-calculus. The main differences between CBO with ϵ-greedy and CBO with OSCO are a) that CBO with ϵ-greedy collects only 3 observations, spending most of the evaluation budget on interventions, whereas CBO with OSCO uses most of the evaluation budget to collect observations; and b) that CBO with ϵ-greedy observes all endogenous variables whereas CBO with OSCO only observes the MOS. That CBO uses most of the evaluation budget on intervening whereas CBO with OSCO uses most of the budget on observations can be explained by two main reasons. First, the definition of ϵ in [2, Eq. 6] implies that the probability of observing in CBO with ϵ-greedy is close to 0 when the number of previously collected observations is low. Second, the optimal stopping formulation in (11) implies that CBO with OSCO will observe rather than intervene when it is more cost-effective.

Figure 6 compares OSCO with the baselines. The first row in Fig. 6 shows convergence curves of CBO and MCBO with and without OSCO for the chain SCM (Fig. 4a and Fig. 4b, with and without a UC, respectively), the synthetic SCM (Fig. 4d), and the PSA SCM (Fig. 4c). An ablation study for different observation costs is shown in the two left-most plots of the second row of Fig. 6. The right-most plots in the second row of Fig. 6 show a) convergence curves of C-UCB with and without OSCO for the bandit version of the synthetic SCM (Fig. 4d); and b) the computational overhead of OSCO. We note that CBO with OSCO outperforms CBO and finds the optimal intervention for all SCMs within the prescribed evaluation budget. Similarly, we observe that C-UCB with OSCO is more cost-efficient than C-UCB without OSCO for the synthetic SCM. We explain the efficient convergence of OSCO by its design, which a) uses look-ahead planning to decide between observing and intervening based on what is most cost-effective; and b) utilises the MOS (Def. 4) to limit the variables that need to be observed. We further note that MCBO with OSCO outperforms MCBO on the chain and PSA SCMs. Moreover, we observe that the performance of CBO is better than that of MCBO on average, which is consistent with the results reported in [60]. This result can be explained by the design of MCBO, which is optimised for the cumulative regret metric rather than the simple regret. Finally, we observe that the computational overhead of OSCO per iteration is less than a factor of 2. Extended evaluation results can be found in Appendix F.

We have formally defined the observation-intervention trade-off that emerges in optimisation problems with causal structure and have shown that this trade-off can be formulated as a non-myopic optimal stopping problem whose solution determines when a causal effect should be estimated by intervening and when it is more cost-effective to collect observational data. We have also characterised the minimal set of variables that need to be observed to estimate the causal effect: the minimal observation set (MOS).
Extensive evaluation results on real and synthetic SCMs show that the optimal stopping formulation can enhance existing algorithms and that the computational overhead is manageable. This paper opens up several directions for future research. One direction is to extend our model to include soft interventions and longer planning horizons. Another direction is to evaluate different reward functions in the optimal stopping problem.
Appendices

Appendix A Notation
Random variables are denoted by upper-case letters (e.g. X) and their values by lower-case letters (e.g. x). The probability mass or density of a random variable X is denoted by P(X). We use x ∼ P(X) to denote that x was sampled from P(X). The expectation of a function f with respect to a random variable X is denoted by E_X[f]. Sets of variables and their values are denoted by bold upper-case and lower-case letters respectively (e.g. X and x). Operators, function spaces and tuples are represented with upper-case calligraphic letters (e.g. M). The power set of a set X is denoted with P(X). The set of all probability distributions over a set X (i.e. the (|X| − 1)-dimensional unit simplex) is denoted with ∆(X). We make extensive use of the do-calculus (for details see [45, §3.4]). The domain of a variable is denoted by dom(·), where e.g. x ∈ dom(X). The set of real numbers and the set of n-dimensional real vectors are denoted with R and R^n respectively. We adopt the family relationships pa(X)_G, ch(X)_G, an(X)_G and de(X)_G to denote parents, children, ancestors and descendants of a given variable X in a graph G; Pa, Ch, An and De extend pa, ch, an and de by including the argument in the result. For example, Pa(X)_G = pa(X)_G ∪ {X}. With a set of variables X as argument, pa(X)_G = ∪_{X∈X} pa(X)_G, and similarly for the other relations.
O_X^{G,Y}: MOS (Def. 4) for an identifiable intervention
X*, x*: optimal intervention set and intervention levels
e: estimation procedure, e ∈ {I, O}
I: estimation by intervention
O: estimation by observation
c(X, e): evaluation cost when using procedure e ∈ {I, O}
T: time horizon of the optimisation
K: maximum evaluation cost
D_t: dataset of measured observations and interventions at stage t of the optimisation
µ_{D_t}: probabilistic model of µ based on D_t
F_{D_t}: probabilistic model of F based on D_t
M_t: stopping problem at stage t of the optimisation
S_k, o_k: state and observation at stage k of an optimal stopping problem
⊥: terminal state of an optimal stopping problem
r(S_k): stopping reward at stage k of an optimal stopping problem
T: stopping time
γ: discount factor for an optimal stopping problem
S, C: stopping and continuation sets for an optimal stopping problem
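As an aside, the family-relationship notation defined above is straightforward to compute programmatically. The following sketch (using networkx purely for convenience, with a hypothetical three-node DAG) illustrates pa, ch, an, de, their inclusive variants such as Pa, and the set-valued union convention.

```python
import networkx as nx

# Hypothetical causal DAG: X -> Z -> Y, X -> Y.
G = nx.DiGraph([("X", "Z"), ("Z", "Y"), ("X", "Y")])

def pa(v, g): return set(g.predecessors(v))   # parents
def ch(v, g): return set(g.successors(v))     # children
def an(v, g): return nx.ancestors(g, v)       # ancestors
def de(v, g): return nx.descendants(g, v)     # descendants

# Inclusive variant: Pa(X)_G = pa(X)_G ∪ {X}.
def Pa(v, g): return pa(v, g) | {v}

# Set-valued argument: pa(Xs)_G = union over X in Xs of pa(X)_G.
def pa_set(vs, g): return set().union(*(pa(v, g) for v in vs))

print(pa("Y", G))             # {'X', 'Z'}
print(an("Y", G))             # {'X', 'Z'}
print(Pa("Z", G))             # {'X', 'Z'}
print(pa_set({"Z", "Y"}, G))  # {'X', 'Z'}
```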
Appendix B Modelling assumptions
This section contains two tables which detail our modelling assumptions on the causal inference and optimal stopping sides, respectively.

Appendix C Causal-effect derivation

Derivation [7] for estimating the causal effect of X on Y in Fig. 1a. We use the shorthand P_X(Y) here to denote the interventional distribution P(Y | do(X = x)). Proof. Summing over Z and applying C-component factorisation, the third rule of do-calculus can be applied using the independence (Y ⊥ Z | X) in G_{X,Y} (see Fig. 1). The second rule of do-calculus can be applied using the independence (X ⊥ Z) in G_X (see Fig. 1).
Task 2: Compute P_{X,Z}(Y). The third rule of do-calculus can be applied using the independence (X ⊥ Y | Z) in G_{X,Z} (see Fig. 1).
We compute the effect inside the sum. Expanding by the chain rule, the third rule of do-calculus can be applied using the independence (Z ⊥ X′) in G_Z (see Fig. 1).
Appendix D Proof of Theorem 1
Proof. For ease of notation, let µ_{S_k}, c_O, and V_{S_k} be shorthands, with V_{S_k} standing for Vol(S_k).
Equation (23) follows from the Bellman equation (7), and (24) is implied by assumption a) and S_k ⊂ S_{k+1}. Similarly, (29) and (30) follow from stationarity of P_{S_t} and submodularity of I, respectively. More specifically, (30) holds because S_k ⊂ S_{k+1} and because of the inequality implied by assumption b). Finally, (31) follows from (7).
Appendix F Additional evaluation results
This appendix contains additional evaluation results, complementing those in the main body of the paper. Appendix F.1 contains results for the chain SCM (see Fig. 4a); Appendix F.2 contains results for the chain SCM with an unobserved confounder (see Fig. 4b); Appendix F.3 contains results for the PSA SCM (see Fig. 4c); and Appendix F.4 contains results for the synthetic SCM with causal graph in Fig. 2.
To re-emphasise, in all experiments, we only explore the POMISs for each SCM -for a complete pseudo-algorithm see Algorithm 1.
F.1 Chain SCM
The chain SCM (see Fig. 4a) is a synthetic SCM that is benchmarked in both [2] and [60]. Figure 5 and Fig. 8 show the estimated probabilistic models µ and F when running CBO with four different policies for balancing the intervention-observation trade-off: i) the ϵ-greedy policy used in [2]; ii) the OSCO approach described in §4; iii) the OBSERVE baseline (which always observes); and iv) the RANDOM baseline, which selects between intervening and observing uniformly at random. In Fig. 8, the left plot (Fig. 8a) shows results when using a policy that always observes and the right plot (Fig. 8b) shows results when using a policy that selects randomly between observing and intervening; the blue curves and the shaded blue areas show the mean and standard deviation of the estimated models F and µ; the red and orange dots show observations and interventions; the black lines show the SCM functions F and the causal effects µ.
We note in the lowest plots that the optimal intervention is do(Z = −3.20) (with target value Y = −2.17) and that this intervention is found in all cases except for the OBSERVE baseline (Fig. 8a). That the OBSERVE baseline does not find the optimal intervention is expected, as the probability of observing the optimal configuration without intervening is low. We further note that all policies that collect observations are able to accurately estimate the interventional distributions through the do-calculus (see e.g. Fig. 8a and Fig. 5b). The main differences between CBO with ϵ-greedy and CBO with OSCO (see Fig. 5) are a) that CBO with ϵ-greedy collects only 3 observations, spending most of the evaluation budget on interventions, whereas CBO with OSCO uses most of the evaluation budget to collect observations; and b) that CBO with ϵ-greedy observes all endogenous variables whereas CBO with OSCO only observes the MOS O_Z^{G,Y} = {Z, Y}. That CBO with ϵ-greedy uses most of the evaluation budget on intervening whereas CBO with OSCO uses most of the budget on observations can be explained by two main reasons. First, the definition of ϵ in [2, Eq. 6] implies that the probability of observing in CBO with ϵ-greedy is close to 0 when the number of previously collected observations is low. Second, the optimal stopping formulation in (11) implies that CBO with OSCO will observe rather than intervene when it is more cost-effective. Figure 9 shows convergence curves of CBO with OSCO and the baselines introduced above for different observation costs c(X, O). We note that the OBSERVE and RANDOM baselines do not always converge to the optimum within the prescribed evaluation budget (see Appendix G for the list of hyperparameters). We further observe that CBO with OSCO on average reaches the optimum at a lower cost than all baselines.
We also compare the performance of OSCO on the chain SCM with the performance of MCBO [60]. Figure 10 shows convergence curves of MCBO with OSCO and the baselines introduced in §6 for different observation costs c(X, O). We note that none of the MCBO policies find the optimum. This is consistent with the findings in [60, Fig. 6] and can be explained by the design of MCBO, which is optimised for minimising the cumulative regret rather than the simple regret. We further note that the measured benefit of adding OSCO to MCBO for the chain SCM is lower than that of CBO (cf. Fig. 10). More specifically, when the observation cost is low (see e.g. the right plot in Fig. 10), MCBO with OSCO yields better results than plain MCBO. When the observation costs are high, however (see e.g. the left plot in Fig. 10), MCBO with OSCO leads to slower convergence. One reason why OSCO works better with CBO than with MCBO is that it can utilise MOS (Def. 4) to limit the number of observations (see §6 for details). We speculate that another reason is the different way of integrating observational data in the probabilistic models µ and F. In CBO, the observational data is integrated with the interventional data by using a causal prior on the interventional distributions, whereas in MCBO the functions F are fitted directly based on both observational and interventional data.
F.2 Chain SCM with an unobserved confounder

Figure 11 and Fig. 12 show convergence curves of CBO and MCBO with different policies for balancing the intervention-observation trade-off for the chain SCM with an unobserved confounder (see Fig. 4b).
We observe that CBO with OSCO performs best on average and that the results resemble those obtained for the chain SCM without the unobserved confounder (see Appendix F.1). Looking at the second and third rows in the figures, we see that CBO and MCBO with OSCO focus on collecting observations in the beginning of the optimisation and then successively increase the frequency of interventions. This contrasts with CBO and MCBO without OSCO, which almost exclusively intervene.
F.3 PSA SCM
The PSA SCM (see Fig. 4c) is based on a real healthcare setting [20] where interventions correspond to dosage prescriptions of statins and/or aspirin to control Prostate-Specific Antigen (PSA) levels, which should be minimised. This SCM is benchmarked in both [2] and [60]. Figure 13 and Fig. 14 show convergence curves of CBO and MCBO with OSCO and the baselines introduced in §6 for different observation costs c(X, O). We note that the only policies that consistently find the optimum are CBO, CBO with OSCO, MCBO, and MCBO with OSCO. We also note that OSCO performs, on average, better than CBO and MCBO. The differences are, however, relatively small. We believe that this is because both CBO and MCBO find the optimum in the PSA SCM after only a couple of interventions, diminishing the need to utilise observational data. This is because the optimal intervention in the PSA SCM is at the endpoints of the domains, i.e. (X*, x*) = ({aspirin, statin}, (0, 1)), which is easy to find.

F.4 Synthetic SCM

Figure 15 shows convergence curves of CBO with different policies for balancing the intervention-observation trade-off for the synthetic SCM with the causal graph in Fig. 2a. We observe that both the policy that always observes and CBO with OSCO perform best on average. This result suggests to us that the most cost-effective way to find the optimal intervention for this SCM is to collect observations and estimate the causal effects via the do-calculus.

Appendix G Hyperparameters
G.1 Hyperparameters for the chain SCM
Table 4: Hyperparameters for the chain SCM.
Initial dataset of observations and interventions (D_1): ∅
µ (probabilistic model of µ, see (3)): Gaussian process (GP)
F: probabilistic model of F
Intervention costs: c(L, I) = |L|²/4 for all L ∈ P(X)
γ (discount factor for the stopping problem): 1
MCBO batch size (batch size for π_O in MCBO): 32
MCBO β (exploration-exploitation parameter in MCBO): 0.5
G.2 Hyperparameters for the chain SCM with an unobserved confounder
The SCM in Fig. 1a uses the same hyperparameters as Fig. 4a listed in Table 4, with the differences that a) there is an unobserved confounder
G.3 Hyperparameters for the synthetic example SCM
The functions (F) for the synthetic example SCM in Fig. 2a, adapted from [2], include an unobserved confounder between S and Y and an unobserved confounder U_ZY = ϵ_ZY between Z and Y.
Initial dataset of observations and interventions (D_1): ∅
µ (probabilistic model of µ, see (3)): Gaussian process (GP)
F: probabilistic model of F
Intervention costs: c(L, I) = |L|²/4 for all L ∈ P(X)
γ (discount factor for the stopping problem): 1
G.4 Hyperparameters for the PSA SCM
The DAG in Fig. 4c describes the causal relationships between statin (node D), aspirin (node C) and prostate-specific antigen (PSA) level (node F ), mediated by a set of non-manipulative variables, adapted from [20]. We use the same SCM as Aglietti et al. [2,Appendix §5].
In (39), U(a, b) denotes the continuous uniform distribution on the interval [a, b], N(µ, σ²) denotes the univariate Gaussian distribution with mean µ and variance σ², and σ(x) denotes the sigmoid function 1/(1 + e^(−x)).
Intervention costs: c(L, I) = |L|²/4 for all L ∈ P(X)
γ (discount factor for the stopping problem): 1
MCBO batch size (batch size for π_O in MCBO): 32
MCBO β (exploration-exploitation parameter in MCBO): 0.5
† The domains used in this experiment are the 25th and 75th percentiles of the measured variables, found in [20, Table 1].
‡ Strictly this is a discrete variable which we have made continuous for computational reasons.
G.5 Synthetic causal MAB
We consider the causal MAB setting in which the causal structure is provided by the SCM with DAG G in Fig. 2a.
where ⊕ is the exclusive-or function. The set of POMISs, the set of MISs and the MOS for each intervention can be found in Table 5.
Appendix H Pseudocode and implementation of OSCO
The optimal stopping problem described in §4 can be integrated with existing causal optimisation algorithms to balance the intervention-observation trade-off. More specifically, given an optimisation policy π_O that determines which intervention to evaluate at each stage of the optimisation, the solution to the optimal stopping problem in (6) determines whether the intervention should be evaluated by intervention or observation. (π_O may for example be implemented by the CBO algorithm [2, Alg. 1] or the causal MAB algorithm in [37, Alg. 1].) The pseudocode for integrating the optimal stopping problem with the existing algorithms is listed in Algorithm 1. The main computational complexity of the integration is the repeated solving of (11), which requires evaluating a potentially high-dimensional integral and evaluating the optimisation policy π_O several times. The integral can be evaluated efficiently using Monte-Carlo methods [51,19] and the evaluations of π_O (which may involve optimisation of an acquisition function, as is e.g. the case in CBO [2]) can be done in parallel. The average execution times per iteration when running CBO and MCBO with and without OSCO are shown in Fig. 17.
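For concreteness, a minimal skeleton of such an integration is sketched below. This is not the paper's Algorithm 1: the callbacks (propose, should_intervene, do_intervene, observe_mos and the cost functions) are hypothetical placeholders for the optimisation policy π_O, the Monte-Carlo solution of the stopping problem, and the data-collection routines.

```python
def osco_loop(budget, propose, should_intervene, do_intervene, observe_mos,
              cost_int, cost_obs):
    """Skeleton of coupling an optimal-stopping rule with a causal
    optimisation policy pi_O (e.g. CBO or a causal MAB).  All callbacks are
    hypothetical placeholders supplied by the surrounding algorithm."""
    D = {"interventions": [], "observations": []}   # running dataset D_t
    spent = 0.0
    while spent < budget:
        X, x = propose(D)                    # pi_O: next candidate intervention
        if should_intervene(D, X, x):        # approximate solution of the stopping problem
            y = do_intervene(X, x)           # "stop": estimate the effect by intervening
            D["interventions"].append((X, x, y))
            spent += cost_int(X)
        else:                                # "continue": collect an observation instead
            D["observations"].append(observe_mos(X))   # only the MOS variables are measured
            spent += cost_obs(X)
    return D
```

The should_intervene callback is where the Monte-Carlo evaluation of (11) would sit, and the repeated calls to propose can be batched or parallelised as noted above.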
I.1 Markov decision processes
A Markov Decision Process (MDP) models the control of a discrete-time dynamical system that evolves in time-steps from t = 1 to t = T and is defined by the seven-tuple [11,49]: M = ⟨S, A, P_M, r, γ, ρ_1, T⟩ (42). S ⊆ R^n denotes the set of states, A ⊆ R^m denotes the set of actions, γ ∈ [0, 1] is a discount factor, ρ_1 : S → [0, 1] is the initial state distribution and T is the time horizon. P_M(S_{t+1} = s_{t+1} | S_t = s_t, A_t = a_t) refers to the probability of transitioning from state s_t to state s_{t+1} when taking action a_t and satisfies the Markov property P_M(S_{t+1} = s_{t+1} | S_t = s_t) = P_M(S_{t+1} = s_{t+1} | S_1 = s_1, ..., S_t = s_t), where s_t ∈ S and a_t ∈ A are realisations of the random vectors S_t and A_t. Similarly, r(s_t, a_t) ∈ R is the reward when taking action a_t in state s_t, which we assume is bounded, i.e. |r(s_t, a_t)| ≤ M < ∞ for some M ∈ R. If P_M and r(s_t, a_t) are independent of the time-step t, the MDP is said to be stationary, and if S and A are finite, the MDP is said to be finite.
A policy is a function π : {1, ..., T} × S → ∆(A). If a policy is independent of the time-step t given the current state, it is called stationary. An optimal policy π* maximises the expected discounted cumulative reward over the time horizon:

π* ∈ argmax_{π∈Π} E_π[ Σ_{t=1}^{T} γ^{t−1} R_t ],

where Π is the policy space, R_t ∈ R is a random variable representing the reward at time t and E_π denotes the expectation of the random vectors and variables (S_t, R_t, A_t)_{t=1,...,T} under policy π. The Bellman equations relate any optimal policy π* to the two value functions V* : S → R and Q* : S × A → R [12]:

V*(s_t) = max_{a_t∈A} Q*(s_t, a_t)     (44)
Q*(s_t, a_t) = E_{S_{t+1}}[ R_{t+1} + γ V*(S_{t+1}) | S_t = s_t, A_t = a_t ]     (45)
π*(s_t) ∈ argmax_{a_t∈A} Q*(s_t, a_t)     (46)

where V*(s_t) and Q*(s_t, a_t) denote the expected cumulative discounted reward under π* for each state and state-action pair, respectively. Solving (44)-(45) means computing the value functions, from which an optimal policy can be obtained via (46).
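To make the Bellman relations concrete, here is a minimal value-iteration sketch that computes V*, Q* and π* for a small finite, stationary MDP (an infinite-horizon discounted variant for simplicity; the transition and reward arrays are toy values, not taken from the paper).

```python
import numpy as np

n_states, n_actions, gamma = 3, 2, 0.9
rng = np.random.default_rng(0)

# P[s, a, s'] = P_M(S_{t+1}=s' | S_t=s, A_t=a);  R[s, a] = r(s, a)  (toy values)
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))
R = rng.normal(size=(n_states, n_actions))

V = np.zeros(n_states)
for _ in range(500):                      # fixed-point iteration on (44)-(45)
    Q = R + gamma * P @ V                 # Q*(s,a) = r(s,a) + γ Σ_s' P(s'|s,a) V*(s')
    V_new = Q.max(axis=1)                 # V*(s) = max_a Q*(s,a)
    if np.max(np.abs(V_new - V)) < 1e-8:
        V = V_new
        break
    V = V_new

pi_star = Q.argmax(axis=1)                # π*(s) ∈ argmax_a Q*(s,a)
print("V*:", V, "π*:", pi_star)
```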
I.2 Markovian optimal stopping problems
Optimal stopping is a classical problem domain with a well-developed theory [66,54,47,16,13,10,49,24]. Many variants of the optimal stopping problem have been studied, for example discrete-time and continuous-time problems, stationary and non-stationary problems, and Markovian and non-Markovian problems. As a consequence, different solution methods for these variants have been developed. The most commonly used methods are the martingale approach [47,16,58] and the Markovian approach [54,13,49,50,10].
In this paper, we focus on a stationary optimal stopping problem with a finite time horizon T, discrete-time progression, a continuous state space S ⊂ R^n, bounded rewards and the Markov property. We use the Markovian solution approach and model the problem as a stationary MDP M, where the system state evolves as a discrete-time Markov process (S_t)_{t=1}^{T}. Here S_t ∈ S and s_t denotes the realisation of S_t. At each time-step t of this process, two actions are available: "stop" (S) and "continue" (C), i.e. (A_t ∈ {S, C})_{t=1,...,T}. The stop action yields a reward r(s_t, S) and terminates the process. In contrast, the continue action causes the process to transition to the next state according to the transition probabilities P_M and yields the reward r(s_t, C).
A stopping time is a positive random variable 1 ≤ T ≤ T that is dependent on s_1, ..., s_T and independent of s_{T+1}, ..., s_T [47]: T = inf{t : t ≥ 1, a_t = S}.
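A compact backward-induction sketch for a finite-horizon stop/continue problem of this form is shown below (toy rewards and transitions chosen for illustration; this is a discretised example, not the specific stopping problem in (11)).

```python
import numpy as np

T_hor, n_states = 10, 4
rng = np.random.default_rng(1)

P = rng.dirichlet(np.ones(n_states), size=n_states)  # P[s, s'] under "continue"
r_stop = rng.normal(size=n_states)                    # r(s, S): reward for stopping
r_cont = -0.1 * np.ones(n_states)                     # r(s, C): per-step continuation cost
gamma = 1.0

V = r_stop.copy()                                     # at t = T the process must stop
stop_region = [None] * (T_hor + 1)
stop_region[T_hor] = np.ones(n_states, dtype=bool)
for t in range(T_hor - 1, 0, -1):                     # backward induction over time-steps
    continue_value = r_cont + gamma * P @ V
    stop_region[t] = r_stop >= continue_value         # stopping set at time t
    V = np.where(stop_region[t], r_stop, continue_value)

# The optimal stopping time is the first t whose state lies in the stopping set.
print("stop in state 0 at t = 1?", bool(stop_region[1][0]))
```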
Physical Activity, Energy Expenditure, Screen Time and Social Support in Spanish Adolescents—Towards an Explanatory Model about Health Risk Factors
Youth obesity has been a pandemic for decades. One of its causes is a low level of physical activity. It is necessary to know the specific situation of adolescents and the factors that influence it in order to be able to act accordingly. The first aim of the current study is to create an explanatory model to establish the relationships between light physical activity time, light physical activity energy expenditure, screen time and social support. The second aim is to propose a theoretical model specifying the relationships between moderate–vigorous physical activity time, moderate–vigorous physical activity energy expenditure, screen time and social support. The study design was non-experimental (ex post facto), descriptive-correlational and cross-sectional. A total of 694 adolescents from the region of Soria (12–17 years) participated in the study. The instruments administered were the Four by One-Day Physical Activity Questionnaire, Parent Support Scale and Peer Support Scale. Two structural equation models were developed to analyse the relationships between the variables that comprised the explanatory models. The results show that social support had a negative influence on screen time in the proposed model in relation to light physical activity (r = −0.210; p ≤ 0.001) and in the proposed one regarding moderate–vigorous physical activity (r = −0.173; p ≤ 0.05). Social support was negatively related to light physical activity time (r = −0.167; p ≤ 0.05). Family support had a greater influence than did peer support. In conclusion, the models for light and moderate–vigorous physical activity are useful to describe the relationships between time, energy expenditure, screen time and social support.
Introduction
Global obesity has tripled since 1975, with more than 340 million children and adolescents overweight or obese in 2016 [1]. Moreover, levels continue to rise [2]. Sedentary lifestyles and eating habits are two of the causes of this pandemic situation [2,3]. Because of this, health and physical activity (PA) are one of the issues of greatest concern in today's society [4,5]. This concern is to some extent due to the fact that PA practice has numerous health benefits [6,7], which include the prevention of overweight/obesity [8]; however, the current levels of practice are low [1, 4,5]. For PA to be beneficial, it should be performed according to recommendations, which for children and adolescents is at least 60 min/day of moderate-vigorous physical activity (MVPA) [5,[9][10][11].
Despite social concern and evidence regarding the benefits of PA practice, more than 70% of adolescents are inactive [12,13], i.e., they do not meet PA practice recommendations [9]. Moreover, these levels decrease with age, with a critical period of decline around the age of 9 years, which is more pronounced in girls than in boys [14]. These levels have worsened since the beginning of the pandemic caused by COVID-19 [15,16].
In order to be able to go deeper into the problem of low levels of physical activity in adolescents, it is necessary to know the types of PA according to their intensity and the factors that condition it. On the one hand, PA can be classified according to its intensity by expressing it in metabolic equivalents (METS). For this purpose, a METS is considered to be the energy equivalent expended by an individual while seated at rest [9]. MVPA is that type of activity that involves an expenditure of at least 3 METS/h.
If it is less than 3 METS, it is considered as light physical activity (LPA) [17]. On the other hand, PA practice is conditioned by several correlates, which vary in type and intensity, depending on age. In adolescents, sedentary behaviour outside the school day and social support (SS) from parents and significant others are among the most significantly influential determinants, with negative and positive relationships, respectively [18,19].
Adolescents spend part of their leisure time in sedentary activities [20], such as screenbased activities (e.g., playing computer or video games or watching TV) [21]. The time spent on these activities exceeds the WHO maximum recommended time of 2 h per day [22][23][24]. In some cases, it even exceeds the time spent on PA [25]. Furthermore, the relationship between PA and screen time (ST) is negative and significant [18,26,27].
The SS perceived by adolescents from parents and peers has the greatest influence on their PA levels [28]. Moreover, the influence they perceive from peers is mostly higher than the influence from their parents, although this varies by country [29]. In any case, the relationship between PA and SS is positive and significant [18,19,30].
Although there is evidence on the relationship between PA and ST, as well as PA and SS, no studies have been found in which a model has been presented that justifies the relationships between the three elements for the adolescent population through direct measurements. In this model, SS could play an important role [31]. The existence of such a model would help to better understand the reality of a population at a given time. In this way, it would be possible to better adapt the actions needed to improve PA levels [32]. Moreover, it would be interesting to know how this model performs depending on the type of PA intensity.
Based on the above, our study was proposed with the following objectives: (1) to create an explanatory model to establish the relationships between LPA time, LPA energy expenditure, screen time and social support of adolescents and (2) to propose a theoretical model that specifies the relationships between MVPA time, MVPA energy expenditure, screen time and social support of young people.
Design and Subjects
The study is framed within the physical activity and health paradigm [33] and behavioural epidemiology [34]. Furthermore, the method is non-experimental (ex post facto), cross-sectional, descriptive and correlational [35] of physical activity, social support and screen time in Spanish adolescents.
The research involved adolescents from Soria (Spain) aged between 12 and 17 years (14.06 ± 1.27). The target population in the region of Soria is 3224 people. The sampling was non-probabilistic and by convenience. The final sample was 694 people, which means a precision error of 3.3%. According to sex, 364 were boys (52.4%) and 330 were girls (47.6%). Out of 19 schools, 17 agreed to participate, and from each of these a class group of students was selected as potential participants. The criterion of accessibility was followed for the selection of the groups, so that those from each centre could answer the questionnaires on the same days.
Instruments and Variables
The use of the Four by One-Day Physical Activity Questionnaire (FBODPAQ) has made it possible to measure physical activity levels. This instrument was designed by
Procedure
The study began by performing a documentary search on the research topic. Afterwards, the research project was drafted, focused on investigating the relationships between PA, ST and SS of adolescent pupils. This was a novelty, as no previous study was found for this purpose.
The research project was based on the ethical principles established in the Declaration of Helsinki. Furthermore, it was approved by the Ethics Committee of the University of Granada (1478/CEIH/2020). In addition, permission for access to the educational centres was obtained from the regional director of education in Soria. In addition, an informed consent form was provided in advance to the adolescents. This had to be signed by their legal guardians and delivered to the research team before the first day of administration of the instruments.
Subsequently, the data obtained were analysed statistically and linked to the previously existing scientific evidence.
Data Analysis
The statistical software IBM SPSS Statistics 26.0 (IBM Corp, Armonk, NY, USA) was used to create the data matrix and perform the descriptive analysis. The Kolmogorov-Smirnov test was used to check that the variables followed a normal distribution. In addition, Cronbach's Alpha test was applied to calculate the reliability of the research instruments.
In addition to the previous software, the IBM SPSS Amos 26.0 (IBM Corp, Armonk, NY, USA) was used. This allowed us to create the structural equation models, and as a consequence, to be able to analyse the relationships between the variables that made up the theoretical model. One of these models (Figure 1) includes five observed or endogenous variables: EE in LPA, time in LPA, ST, parental support and support from friends. In addition, the unobserved or exogenous variable SS was included. In the other model ( Figure 2), the same variables were used; however, those relating to EE and time were derived from MVPA.
We decided to analyse the relationship of variables in two theoretical models, one for LPA and the other for MVPA, because the factors that influence each type of PA and the health outcomes are different. This is because the scientific evidence mentions that MVPA has greater health benefits, for example, reducing the risk factors associated with cardiovascular disease or obesity [17]. In addition, international practice recommendations are relative to the time of performing MVPA [5,[9][10][11].
In contrast, LPA has fewer health effects, is not counted in compliance with PA practice recommendations, and is linked to sedentary activities [17]. With respect to the endogenous variables that comprise the models, the measurement error of these variables is incorporated. This is a consequence of the causal explanation of the observed associations between indicators and measurement reliability. In addition, the one-way arrows represent the lines of influence between the latent variables. This allows for their interpretation with the incorporation of the regression weights.
Finally, the fit of the proposed models was assessed. The goodness of fit has to be evaluated with respect to Chi-Square, where a correct fit is indicated by a non-significant associated p-value [45,46]. Likewise, the comparative fit index (CFI) has to be higher than 0.95, with a normal fit index (NFI) score higher than 0.90, incremental fit index (IFI) higher than 0.90, Tucker-Lewis index (TLI) higher than 0.90 and root mean square error of approximation (RMSEA) lower than 0.1 [47,48].
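As a simple illustration of how these cut-offs can be applied, the following sketch checks a set of reported fit indices against the thresholds above. The numbers used in the example are the values reported for the LPA model in the Results section; the helper function is only illustrative and is not part of the SPSS Amos workflow used in the study.

```python
def acceptable_fit(cfi, nfi, ifi, tli, rmsea):
    """Apply the cut-offs used in this study: CFI > 0.95, NFI > 0.90,
    IFI > 0.90, TLI > 0.90 and RMSEA < 0.1."""
    return {
        "CFI": cfi > 0.95,
        "NFI": nfi > 0.90,
        "IFI": ifi > 0.90,
        "TLI": tli > 0.90,
        "RMSEA": rmsea < 0.1,
    }

# Fit indices reported for the LPA model.
print(acceptable_fit(cfi=0.993, nfi=0.991, ifi=0.993, tli=0.965, rmsea=0.095))
# -> every criterion is met (all values True).
```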
Results
The model developed through the variables in a representative sample of adolescents in the region of Soria shows a good fit for each of the indices that comprise it. For the model developed for the practice of light physical activity, the Chi-Square showed a significant p-value (X² = 8.489; df = 2; p = 0.014). However, due to the influence of sample susceptibility and sample size, the data cannot be interpreted in an independent way [49]; therefore, other standardised fit indices have been used. In this case, the CFI scored 0.993, the NFI reflected a value of 0.991, the IFI showed a score of 0.993, the TLI showed a score of 0.965, and finally the RMSEA reflected a score of 0.095.
In this case, focusing on what is shown in Table 1 and Figure 3, the SS variable shows negative relationships with the practice of LPA (r = −0.167; p < 0.05) and with ST (r = −0.210; p < 0.001). However, positive relationships are observed with support from friends (r = 0.686), with family support (r = 0.871; p < 0.001) and with EE (r = 0.001). Following with the time spent practising LPA, a positive relationship was observed with ST (r = 0.239; p <0.001) and with EE (r = 0.944; p < 0.001). Finally, regarding the relationship between EE and ST, a negative relationship was observed (r = −0.067; p < 0.001).
Proceeding with the model developed for MVPA, a good fit is observed for each of its component indices. In this case, the Chi-Square showed a non-significant p-value (X² = 1.236; df = 2; p = 0.539). Likewise, the CFI scored 0.999, the NFI reflected a value of 0.999, the IFI score was 0.991, the TLI showed a score of 0.994, and finally the RMSEA reflected a score of 0.004.
In this case, Figure 4 and Table 2 show the existing relationships for participants practising MVPA. Focusing attention on SS, a negative relationship with ST is observed (r = −0.173; p < 0.05). However, positive relationships are shown with EE (r = 0.328: p < 0.001), support from friends (r = 0.818), support from family (r = 0.834; p < 0.001) and time spent practising MVPA (r = 0.033). Continuing with ST, negative relationships are shown with EE (r = −0.120; p < 0.05) and time of MVPA (r = −0.015). Finally, a negative relationship is shown between ST and EE (r = −0.120; p < 0.05).
Discussion
The theoretical models presented serve to explain the relationships between SS, the dimensions (time and EE) of PA and ST of adolescents in the region of Soria. There are significant differences between the LPA model and the MVPA model. Next, we will compare both models and proceed to a discussion based on the scientific literature.
In both models presented in this study, the importance of social support as a determinant of physical activity is perceived. With respect to the LPA model, SS was slightly, negatively and significantly related to LPA time. In contrast, the relationship between SS and EE in LPA was positive and not significant. Other links are also observed in the MVPA model. The relationship between SS and MVPA time was positive and not significant. In contrast, the relationship between SS and EE in MVPA was positive but significant.
The differences in the relationships found between SS and PA may be due to the way in which the EE of PA is calculated as a function of intensity. In FBODPAQ, five categories of PA are differentiated [36][37][38]: very mild (1.5 METS/h), mild (2.5 METS/h), moderate (4 METS/h), severe (6 METS/h) and very severe (10 METS/h). Therefore, the intensity stipulated in the questionnaire protocol was considered for the calculation of EE.
In contrast to this, for the present study, and based on the trend in the scientific literature [5,[9][10][11]17], we decided to regroup the categories nominally into two: LPA (<3 METS/h) and MVPA (≥3 METS/h) as in previous studies [50,51]. However, this was not the case for the time computation, as it was the sum of PA times according to intensities.
As a consequence, it can be deduced that the activities with the highest average METS of each of the source categories substantially influenced the EE calculation, with those of high intensity playing a particularly important role. Likewise, it would be convenient to differentiate the type of PA intensity when measuring the degree of compliance with the practice recommendations. Furthermore, this explanation justifies the relationships between time and EE for both LPA and MVPA, with both being almost perfect and highly significant.
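To make the regrouping concrete, the following minimal sketch (with invented activity records; the METS/h values per category are those stipulated in the FBODPAQ protocol described above) classifies each category into LPA or MVPA and accumulates time and EE separately.

```python
# METS/h assigned to each FBODPAQ intensity category.
METS = {"very mild": 1.5, "mild": 2.5, "moderate": 4.0, "severe": 6.0, "very severe": 10.0}

def regroup(records):
    """records: list of (category, hours). Returns time (h) and EE (METS*h)
    for LPA (< 3 METS/h) and MVPA (>= 3 METS/h)."""
    out = {"LPA": {"time": 0.0, "ee": 0.0}, "MVPA": {"time": 0.0, "ee": 0.0}}
    for category, hours in records:
        mets = METS[category]
        group = "LPA" if mets < 3 else "MVPA"
        out[group]["time"] += hours
        out[group]["ee"] += mets * hours
    return out

# Invented example: one adolescent's day of reported activities.
day = [("very mild", 2.0), ("mild", 1.0), ("moderate", 0.5), ("severe", 0.25)]
print(regroup(day))
```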
Little evidence has been found to compare the relationship of SS and time in LPA obtained with that of other studies. Lawman and Wilson [52] found that parental SS was positively and significantly related to the LPA of obese underserved adolescents. This relationship with MVPA was positive but not significant. In addition, Huffman et al. found that tangible parental support was positively related to minutes of LPA [53]. The relationship between SS and MVPA time of the Soria adolescents is similar to that found in other studies, being positive.
The study by Wang et al. [54] showed that higher levels of family support in Chinese adolescents were less likely to have insufficient PA. Engels et al. [55] found a positive relationship between friends' support and MVPA in Hawaiian adolescents. Furthermore, in the case of family support, the relationship was significant. Pluta and colleagues [56] also found positive and significant relationships in both cases and even with teacher support in adolescents from Wielkopolska (Poland). Although it was not considered in the study, it should also be considered that not all types of social support are equally influential. Tangible support appears to be one of the most important [57].
In this study, conducted with an adolescent population in Soria, it can be observed that the relationship between SS for PA and ST was negative, weak and significant. This is true for both explanatory models. This result was similar to the study by Park and Park [58] regarding the relationships of parental SS for PA, PA, ST and body weight of US high-school students in an explanatory model.
Costigan et al. [59], in a review of an adolescent female population, highlighted that three of the four studies on the subject found a negative relationship between screen-based sedentary behaviour and socialising/SS, one of which was significant. In contrast, the other study had a positive and significant relationship.
Although the relation between SS and PA was evident, as well as SS and ST, it can be seen how, in the two models obtained, family support had a greater relation with the latent variable SS than friends, which was positive and significant. The type of relationship is in line with the results of previous studies but with neither the degree nor the predominance. Haidar et al. [60] found that the SS of friends was a better predictor of moderate PA, vigorous PA and PA than the SS of parents.
Regardless of the type of support, the predictions were significant. The same was true for ST prediction but not significantly so. Lisbon and colleagues [15] obtained similar results to those of the study by Haidar et al. In this study, the structural equation model showed a greater relationship of PA with SS from friends than from family. In relation to the rural Chinese youth population [61], support from families was higher than from friends. However, with respect to relationships with PA, support from friends was more related to exercise intensity and time but not to frequency. Regardless of the variables, all relationships were positive and significant.
Perhaps the difference in the predominance of the agent that most influences SS is due to the fact that in the other studies the relationships between PA and SS were analysed without considering the influence of the variable ST in this relationship. It could also be that there were other variables not considered in any of the studies that influenced these relationships. This difference could even be due to the fact that, in the present study, SS was calculated as the mean score of the items instead of considering them independently.
In this study with a population from Soria, different relationships were obtained between ST and PA. Time in LPA was positively, slightly and significantly related to ST. In contrast, the relationship with time in MVPA was negative, slight and non-significant. On the other hand, a similar relationship was observed as a function of EE. Although the relationship between MVPA-EE and ST was slightly higher, both MVPA-EE and ST were negative, slight and significant. These differences in the relationships between time and EE may be due to the fact that the original subcategories of the questionnaire "very light PA" and "very vigorous PA" are the most negatively related to ST and, therefore, the ones that most condition the newly created categories LPA and MVPA.
These results are similar to those found in previous studies. O'Brien et al. [24] also found negative, but non-significant, relationships between MVPA and overall ST in a sample of Irish adolescents as a function of gender. Furthermore, these relationships were similar in moderate PA and vigorous PA. Braig et al. [27] found that the leisure time PA of 13-year-old adolescents was negatively related to different types of screen activities, with the exception of TV viewing, which was positively related.
These relationships were non-significant, irrespective of the sex of the young people. Costigan et al. [59] also found that the majority of studies (60%) selected in their systematic review demonstrated a negative association between PA/fitness and screen-based sedentary behaviour. McVeigh and Meiring [62] found that, in a sample of South African adolescents, PA decreased with age while screen time increased, although no relationship was found between the two variables.
In this study, SS is a determinant of models, including LPA, MVPA and ST variables. Furthermore, SS was shown to be a mediator between physical exercise and social anxiety [61], and thus it could be a mediator of other factors as well.
In the following, the limitations of the study will be discussed. Although the results are generalisable to the adolescent population of the region of Soria, they cannot be extrapolated to the adolescent population worldwide. As shown, the determinants that condition PA vary according to the characteristics of the population, both in type and intensity. Another limitation is due to the use of the FBODPAQ. This instrument asks about the previous day's physical activity and was administered over four days.
In addition, several of the studies cited in this paper used questionnaires that asked for PA over the last seven days. This makes it easier to compare the PA levels more objectively. Thirdly, ST was calculated as the sum of FBODPAQ item scores (TV viewing and use of "computer, video games and internet"). This implies that activities, such as mobile phone and tablet use, were not considered.
There is the limitation that there could be a variable not considered in the study that has a direct influence on those considered and that could modify the contrasted models.
Finally, future lines of research will be proposed. It would be interesting to conduct similar studies with adolescents from other cities and countries, in order to be able to compare the results. It would also be possible to extend the age of the participants and analyse the evolution in a longitudinal study. Knowing how each type of SS of the different agents influences similar models would be useful for adopting more effective measures to promote PA.
Conclusions
The first theoretical model presented is useful for explaining the relationships between the time of LPA, EE of LPA, ST and SS of adolescents in the region of Soria. This was also corroborated for the model relating to MVPA. This is because, in both cases, the general equations include parameters with acceptable values.
SS was a determining factor in both explanatory models. It was negatively related to LPA time and ST in the first model and ST in the model relative to MVPA. In addition, SS was related to time in MVPA, but positively and, unlike the rest of the previous links, it was not significantly related to time in MVPA.
LPA practice time was positively related to ST. On the other hand, time spent in MVPA was negatively related, although in this case, it was not significantly so. The family was the agent that most influenced the mean levels of SS, regardless of the PA intensity model. The difference between relationships was greater in the LPA model.
Prevention of Alcohol-Related Crime and Trauma (PACT): brief interventions in routine care pathway – a study protocol
Background Globally, alcohol-related injuries cause millions of deaths and huge economic loss each year. The incidence of facial (jawbone) fractures in the Northern Territory of Australia is second only to Greenland, due to a strong involvement of alcohol in its aetiology, and high levels of alcohol consumption. The highest incidences of alcohol-related trauma in the Territory are observed amongst patients in the Maxillofacial Surgery Unit of the Royal Darwin Hospital. Accordingly, this project aims to introduce screening and brief interventions into this unit, with the aims of changing health service provider practice, improving access to care, and improving patient outcomes. Methods Establishment of Project Governance: The project governance team includes a project manager, project leader, an Indigenous Reference Group (IRG) and an Expert Reference Group (ERG). Development of a best practice pathway: PACT project researchers collaborate with clinical staff to develop a best practice pathway suited to the setting of the surgical unit. The pathway provides clear guidelines for screening, assessment, intervention and referral. Implementation: The developed pathway is introduced to the unit through staff training workshops and associated resources and adapted in response to staff feedback. Evaluation: File audits, post-workshop questionnaires and semi-structured interviews are administered. Discussion This project allows direct transfer of research findings into clinical practice and can inform future hospital-based injury prevention strategies.
Background
Half of alcohol-attributable deaths are a result of injury worldwide [1]. Alcohol-related harm is a major cause of mortality and morbidity in Australia, causing around 3,000 deaths and 65,000 hospitalisations every year [2]. Alcohol-related injuries are more commonly caused by heavy drinking than by people with severe alcohol dependence, and strong links exist between alcohol-related trauma, crime and binge drinking [3].
The most recent national survey of drug use estimates that one in five Australians drink at a level that puts them at risk of short-term harm at least once a month [4], and rates in its Northern Territory (NT) are especially high. Indigenous Australians constitute a substantial proportion of residents of the NT (32%) and are six times more likely to drink at high-risk levels than non-Indigenous Australians [5]. In consequence, Indigenous Australians are also at greater risk of alcohol-related harms than other Australians. Harms associated with high-risk alcohol consumption in Indigenous Australians include family conflict, domestic violence and assaults [6,7], and alcohol is the leading cause of injury among Indigenous Australians, followed by intimate partner violence [8]. In fact, rates of death from exclusively alcohol-related conditions are almost eight times greater for Australian Indigenous males than for non-Indigenous males and 16 times greater for Indigenous females than for non-Indigenous females among residents of Western Australia, South Australia and the Northern Territory [9]. The percentage of alcohol-related deaths among young Indigenous Australians aged 15-24 is almost three times higher than for their non-Indigenous counterparts [9].
Alcohol-related violence is the most common cause of hospital admission for injury in the NT [10], accounting for 38% of the total injury admissions for Indigenous people. Further, it has been reported that most of the assaults against women in remote NT communities are perpetrated by a drunken husband or other family member. Alcohol-related facial trauma is common, with an estimated 350 cases per year admitted to the Maxillofacial Surgery Unit of the Royal Darwin Hospital (RDH) [11,12]. Many of these admissions, approximately 80% are of Indigenous people [12]. Therefore, there is an urgent need for an effective and culturally appropriate intervention to address binge drinking and alcohol-related harm in this group. In the general population, screening and brief counselling can reduce high-risk alcohol consumption and alcohol-related assaults associated with binge drinking, but more research is needed on alcohol-related trauma among the NT Indigenous people. 'Motivational care planning' (MCP) is a broad-based motivational intervention to improve health and wellbeing of Indigenous Australians. In early research, it demonstrated acceptability and an ability to engage participants [13], and it resulted in significant improvements in wellbeing, substance use and self-management [13,14]. This project adapts and applies MCP to inpatients with alcohol-related facial injuries.
Aim
The PACT project aims to introduce routine screening and brief intervention by staff of the Maxillofacial Surgery Unit at Royal Darwin Hospital (RDH), in order to raise awareness of at-risk drinking and prevent recurrent injury in inpatients with alcohol-related facial injuries.
This project aims to answer the question: Will the introduction of screening and brief interventions change health service provider practice and reduce alcohol-related injuries secondary to assault?
We predict that a participatory action approach to implementing a best practice pathway to referral and treatment for high-risk alcohol users admitted to the Maxillofacial Surgical Unit with injuries will change health service provider practice and reduce alcohol-related injury.
Research plan
This 18-month project introduces screening and brief interventions for high-risk drinkers admitted to hospital with facial trauma and evaluates the implementation of a best practice pathway. The project transfers skills and resources to hospital staff to support delivery of best practice and to evaluate progress through continuous quality improvement strategies.
Establishment of Project Governance
An Indigenous Reference Group and an Expert Reference Group oversee the project. The IRG comprises senior urban-and community-based Australian Indigenous people and is established through Menzies School of Health Research. The ERG includes senior representatives from the Maxillofacial Surgery Unit and the NT Department of Health. The research team consists of a project leader and project manager from Menzies School of Health Research, senior representatives from the Maxillofacial Surgery Unit and Indigenous researchers at Menzies. The day-to-day management of the project is the responsibility of the project leader and project manager, in collaboration with the Indigenous research officers. The research team and ERG formally meet by teleconference or face-to-face every three to six months.
Development of best practice pathway
A tailored best practice pathway suited to the setting of the surgical unit is developed in collaboration with hospital staff, following exploration of current systems ( Figure 1). The pathway includes four key activities: (1) Brief screening of all admissions, (2) An information booklet for those at risk, (3) Referral to appropriate services for those at risk, (4) Delivery of a culturally-adapted brief intervention. Staff are trained to apply the best practice pathway to all patients.
All relevant services in Darwin are contacted via email and phone, informed of the project aims and invited to participate. Engagement of community services includes visiting the organisation, understanding specific referral processes and exploring the capacity of external services to provide in-house services for potential clients in the hospital. Representatives from the various agencies present at PACT workshops to inform and educate hospital staff about available services for patients with substance abuse problems and/or wellbeing concerns. A pamphlet is developed containing contact information and a brief synopsis of Alcohol and Other Drugs (AOD), mental health and domestic violence services both within and outside the hospital system.

Implementation of best practice pathway

Implementation is multi-faceted, and includes information and consultation meetings with staff and community service providers, staff training workshops, key informant interviews, feedback sessions and the introduction of relevant resources (available online and in hard copy form).
The implementation of the best practice pathway includes three key activities: (1) information and consultation meetings with staff and community service providers, (2) staff training workshops, and (3) resource development, each described below.
Information and consultation meetings with staff and community service providers
Consultation meetings are conducted with hospital staff and community service providers to inform the development of best practice protocols for detection and treatment of patients who demonstrate high-risk drinking. These meetings aim to improve understanding of the current strategies and practical issues surrounding implementation in the hospital setting.
Staff training workshops
Up to six PACT training workshops, involving up to 50 hospital-based service providers, are conducted. The workshops provide information about screening, referral services and brief interventions. A post-workshop participant evaluation questionnaire is collected and results analysed. The questionnaire incorporates ordinal scales and open-ended questions. Participants are asked about knowledge and confidence in screening, brief intervention and referral for at-risk drinkers. Knowledge and confidence are rated on a scale from 1 (not much /not confident) to 9 (a lot /very confident). Participants are also asked to rate how interesting and useful the workshop was on a scale from 1 (not at all) to 4 (very). The questionnaire includes a section for attendees to comment on their experience and state whether they would change their practice as a result of the workshop. A joint feedback workshop in the last three months of the study will report on the key findings from staff training activities and file audits over the course of the project.
Resource development
Brief intervention resources and a best practice protocol manual are prepared. Tools for ongoing continuous quality improvement are made available to the surgery unit.
Feedback
Feedback of interview responses by key informants and the results of the file audits allow opportunity for refinement of the care pathway, goal setting, further training and dissemination. Project findings will be presented at relevant conferences within Australia and results are to be published in appropriate scientific journals, particularly those that target hospital care of high-risk drinkers.
Evaluation of the best practice pathway process

Process evaluation
The process activities include evaluating: (1) number of workshops held and workshop content and format, staff trained, (2) number and type of staff attending workshops, and (3) number and type of training and education resources developed.
Outcome evaluation
Outcome evaluation activities include 1) file audits, and 2) semi structured interviews.
File audits
The project team conducts two file audits: one at baseline and one at 9 months, and records the number of admissions related to high-risk drinking, screenings, patients flagged to be 'at-risk' , brief interventions delivered, information distributed and referrals completed. The review of files allows the project to monitor outcomes. This information is essential for review and feedback for quality improvement and development of care processes.
Key informant interviews
The project team conducts a small number of key informant interviews to explore client, family and service provider perspectives on the process. Five key informant interviews with surgical unit staff assess confidence and knowledge as well as challenges and enablers to screening and best practice. Five key informant interviews with patients and families explore their experience of the best practice pathway.
Selection criteria
There are two target populations: (1) Service providers who care for clients admitted to the Maxillofacial Surgical Unit, and (2) Patients or Clients admitted to Royal Darwin Hospital with facial trauma during the study. Patient participants will need to be at least 18 and able to give informed consent.
Statistical analysis and sample size
Analysis of file audits and interviews will include descriptive statistics and qualitative data grouped and analysed by theme.
There are two samples: files to be audited and individuals to be interviewed.
1. The files to be audited will include a sample of trauma patients admitted to the Maxillofacial Surgery Unit during the six months prior to commencement of the study and the 9 months of the study from baseline (estimated 160 files). Files are examined for frequency of screening, recorded evidence of brief interventions given for those at risk and documentation of uptake of the new pathway. They will also be examined for client outcomes in terms of wellbeing, alcohol-related medical problems and high-risk drinking.
2. A small sample of client participants and service providers will be interviewed to assess establishment of the new pathway within routine care. We will purposely sample five clients and five service providers to explore enablers and challenges and the client experience. We have chosen this sample size in order to be able to gain some insight into these client and service provider perspectives whilst keeping within the resources and brief time frames of the study.
Ethical approvals
The study has been granted full ethics approval by the Human Research Ethics Committee of Department of Health and Menzies School of Health Research (HREC-11-1553). Data are accessible to the investigators and support investigation team only. In the audit forms, we will not record identifiable client information such as client's names or registration numbers. Instead, a code will be used as identifier. This enables checking of data during the cleaning of audit data where necessary. Codes linked with client's names will be retained by the research team and a copy stored electronically with the rest of the data at Menzies in a separate file accessed only by password.
Engagement with stakeholders
This project is a partnership between the RDH Maxillofacial Surgery Unit, the Alcohol and Other Drug (AOD) program NT wide, the remote AOD Workforce Program and Menzies School of Health Research. The AOD Workforce Program operates within a number of Aboriginal-controlled and government health centres in urban and remote settings across the NT. The NT Department of Health AOD program, the remote AOD workforce and the RDH surgery unit are key supporting partners. The AOD program assisted in the development of this project outline and is committed to its success. The surgery unit of RDH proposed the project and strongly supports the project's aims.
Benefits
The project will implement and evaluate strategies for screening and intervention for reducing the harms associated with alcohol consumption in Indigenous patients in the NT in line with the Aboriginal and Torres Strait Islander Peoples Complementary Action Plan 2003-2009. The main objectives of the action plan are control of supply, management of demand, reduction of harm, early intervention and treatment.
The value of the study includes direct benefit to participants through improved wellbeing, decreased recurrence of injury and less substance misuse. The study benefits the service providers who care for high-risk drinkers who sustain injury by allowing them to provide timely advice and intervention.
First European leaf-feeding grape phylloxera (Daktulosphaira vitifoliae Fitch) survey in Swiss and German commercial vineyards
Recent observations report the worldwide incidence of leaf-feeding grape phylloxera in formerly resistant scions of commercial vineyards. To analyze the genetic structure of leaf-feeding phylloxera, we performed an extensive sampling of leaf-feeding phylloxera populations in seven regions ("cantons") in Switzerland and Germany. The use of polymorphic microsatellite markers revealed the presence of 203 unique grape phylloxera multilocus genotypes. Genetic structure analyses showed a high genetic similarity of these European samples to phylloxera samples from the species' native habitat on Vitis riparia (northeastern America). Nevertheless, no genetic structure within the European samples was observed, and neither host, geography nor sampling date had clear effects on phylloxera genetic stratification. Clonality was high in commercial vineyards, and leaf-feeding grape phylloxera strains were found to be present in scion leaves and rootstock roots in the same vineyard, potentially indicating migration between both habitats. We found indications of sexual reproduction, as shown by high degrees of genetic variation among collection sites.
Introduction
Grape phylloxera (Daktulosphaira vitifoliae Fitch) is among the most important viticultural pests. The roots of the European grapevine (Vitis vinifera L.) are highly susceptible to this insect, which caused the devastation of many own-rooted vineyards when introduced into Europe in the nineteenth century. Farmers managed the pest by the subsequent use of partially resistant American non-vinifera Vitis species or hybrids as rootstocks. As an introduced pest, grape phylloxera can be commonly found nowadays in commercial vineyards, both feeding on roots of partially resistant rootstocks and leaves of partially resistant grapevine cultivars (Powell et al. 2013; Griesser et al. 2015). Recent reports show the incidence of leaf-feeding phylloxera on leaves of both partially resistant and formerly resistant vines (Bao et al. 2014; Fahrentrapp et al. 2015; Forneck et al. 2016). The vitality and lifespan of vineyards infested by root-feeding phylloxera (later on termed 'phylloxerated') depend on many factors, including rootstock origin, phylloxera biotype and population density, soil type, vineyard management practices and diverse abiotic and biotic stress factors (see Powell et al. 2013). Grape phylloxera can generate large economic losses to vine growers (Folwell et al. 2001), which emphasizes the need to identify and monitor strain diversity and seasonal population changes in commercial vineyards. Population genetic studies can provide insights into the evolution of reproduction modes, adaptive strategies of aphid species in agroecosystems, and the influence of environmental and anthropogenic factors on the genetic diversity and structure of aphid populations (Dixon 1977). Population genetic studies also facilitate the design and optimization of sustainable pest management strategies, such as effectively assessing and controlling grape phylloxera infestation levels (Benheim et al. 2012). They also play a key role in quarantine strategies (Clarke et al. 2017) and in the determination of vineyard value (Benheim et al. 2012).
Leaf-feeding phylloxera biotypes may cause damage in commercial vineyards, especially if phylloxera abundance is high early in the season. The phylloxera life cycle exists in many variants (reviewed in Forneck and Huber 2009). The most common life cycle in Europe begins in springtime with either the fundatrix hatching from an overwintering egg (holocycle) or by the first instar larva (hibernales) migrating from roots to leaves and inducing galls. During the grapevine vegetative cycle, both leaf-and root-feeding larvae reproduce asexually and reach up to 4-5 generations per season (Forneck et al. 2001). Toward the end of the season, alate nymphs produce sexual adults (sexuales) which mate, and a single egg (the overwintering egg) is laid by the female. Grape phylloxera feeds on partially-resistant rootstocks, establishing an infestation level that may affect the vigor and longevity of the vine, but rarely results in plant death (Benheim et al. 2012). Nevertheless, fatal plant effects have been occasionally observed in the case of infestation by phylloxera-devastating 'superclones', as found for 'superclones' G1 and G4 in Australian vineyards (Corrie et al. 2002). Phylloxerated vineyards may produce losses if only partially resistant rootstocks are chosen but, in general, root-feeding phylloxera is successfully managed by appropriate rootstock selection.
In regions where interspecific grape hybrids (V. vinifera x American Vitis species) are traditionally grown for either conventional or organic wine production (like Léon Millot or Maréchal Foch hybrids), high infestation rates on the leaves are frequently observed (Fahrentrapp et al. 2015;Jubb 1976). Prevailing reports also indicate a heavy incidence of leaf-galling phylloxera on V. vinifera cultivars, in an increasing random frequency throughout diverse winemaking regions worldwide. In this sense, phylloxeration has been observed in the canopies of some V. vinifera cultivars (e.g. Riesling, Chasselas, Chardonnay, Müller Thurgau, Cabernet Sauvignon and Viognier) in Germany (e.g. Forneck et al. 2017a), Switzerland (Fahrentrapp et al. 2015), Austria (Könnecke et al. 2010), Uruguay, Brazil, Peru (Vidart et al. 2013) and Australia (Powell, K.S. Pers. Comm.).
The reasons for the increasing infestation rates on leaves of V. vinifera cultivars and interspecific hybrids are unknown. Environmental factors, related to climate change conditions, as well as changes in vineyard management practices, have been discussed (Powell et al. 2003). The general decline of pesticide use in grape production, and changes towards intensive leaf management practices may provide a more favorable environment for leaf galling. Elevated soil temperatures and the lack of strong winter frost events may increase the survival of hibernating phylloxera instars leading to more abundant spring population densities. In years with early bud-break, the first reproducing generation of phylloxera have been observed by April in Austria (Forneck et al. 2017b pers. observation). As the number of generations increase, the population size expands both on leaves and roots, allowing establishment of multi-annual grape phylloxera populations in commercial vineyards. Phylloxera populations capable of feeding on rootstock roots and scion leaves can be classified in a series of defined biotypes (A-G), based on phylloxera -host plant interaction (Forneck et al. 2016). To date it is unclear whether these populations rise from hybridization, mutation, or if they are the result of new introductions. Although the actual effects of leaf infestation in commercial vineyards have not been systematically analyzed yet, it is likely to cause long-term economic losses to grape growers by reducing crop yield or through potential negative effects on wine quality.
This study aims to evaluate, for the first time, the genetic structure of leaf-feeding grape phylloxera populations in commercial vineyards of Central Europe by extensively sampling phylloxera populations in diverse regions ('cantons') of Switzerland and Germany. Here, we tested if there was a link between phylloxera populations and plant host and/or edapho-climatic conditions in the vineyards. Results also provide novel information on the phylloxera mode of reproduction. Finally, we add further evidence on the origin of the grape phylloxera population present in Central Europe by comparing our genotypes with previously reported strains from natural and introduced ranges.
Material and methods
European grape phylloxera sampling in commercial vineyards
Leaf- and soil-emerged D. vitifoliae samples were collected as described previously (Fahrentrapp et al. 2015). In brief, commercial vineyards from 29 sampling sites throughout the wine-producing regions of Switzerland and Germany were selected (Fig. 1 and Table 1). Leaf gall samples were collected in 2013 and 2015 from May to October by detaching whole leaves from vineyard canopies and storing them at −20°C until further analyses. Based on Powell et al. (2009), we used emergence traps for soil-emerging phylloxera collection, which were removed from soil after 2-4 weeks and rinsed with ethanol (70%). D. vitifoliae individuals were then collected and stored in ethanol at 2°C until further processing. In total, 335 individuals were collected and considered in this study.
European grape phylloxera genotyping
Adult phylloxera individuals were individually finely ground with sterile plastic pestles in 200 μl of 5% Chelex 100 (BioRad, USA) solution for genomic DNA extraction. Ground samples were incubated at 90°C for 20 min with frequent mixing, then thoroughly vortexed, centrifuged for 10 min, and 100 μl of the supernatant was transferred into a new tube. The Chelex-based extract was mixed with 10 μl of 3 M sodium acetate solution and 100 μl of isopropanol, and kept overnight at −20°C for DNA precipitation. Samples were centrifuged at 500×g for 5 min, and DNA pellets were washed with 70% ethanol, followed by the addition of 100 μl of 1X TE buffer (Forneck et al. 2017a). A set of seven highly polymorphic SSR markers (Phy_III_55, Phy_III_30, Phy_III_36, Dvit6, DV4, DV8 and DVSSR4 (Forneck et al. 2017a)) was selected and analyzed in the 335 phylloxera samples. Polymerase chain reaction (PCR), separation of fragments and allele calling were performed following the procedures detailed in Riaz et al. (2017) and Forneck et al. (2017a).
In every set of samples, six control genotypes were included to keep allele calling consistent (Forneck et al. 2017a). The analysis revealed the presence of 203 unique grape phylloxera multilocus genotypes (MLGs).
Native grape phylloxera data
Available SSR data from 502 grape phylloxera MLGs were obtained from Lund et al. (2017) for comparison with international data. This dataset mainly corresponds to samples collected from grape phylloxera's native range (USA), but also includes some samples from introduced ranges of Argentina (2), Brazil (3), California (5), Peru (3), Uruguay (4), Austria (7) and Hungary (8). Within the set of markers used to identify such MLGs, and to allow a joint analysis with the European samples, we focused on the analysis of 4 SSRs (Phy_III_55, Phy_III_30, Phy_III_36, Dvit6).
Data analysis
Phylloxera population structure was analyzed using the model-based clustering method implemented in STRUCTURE (Pritchard et al. 2000) on the whole set of unique European and American phylloxera MLGs. As stated above, this method was run on the basis of four SSR markers (Phy_III_55, Phy_III_30, Phy_III_36 and Dvit6) genotyped in both datasets, and assuming an admixture model with uncorrelated allele frequencies.
The model was tested in a number of hypothetical genetic groups (K) ranging from 1 to 15, and each run was replicated 10 times to assess the consistency of the results, using a cycle of 250,000 burn-in steps followed by 500,000 Markov Chain Monte Carlo iterations. The most probable number of genetic groups was assessed following the ΔK criteria (Evanno et al. 2005) using STRUCTURE HARVESTER (Earl 2012). Phylloxera MLGs were assigned to a genetic group considering a membership coefficient over 0.95; otherwise, genotypes were considered as 'admixed'. The same procedure was applied to the set of European MLGs, but using the seven SSR markers previously listed. In parallel, a principal component analysis was performed by means of the DARwin software (Perrier and Jacquemoud-Collet 2006). Allele frequencies, mean number of alleles per locus, observed heterozygosity (Ho) and unbiased estimates of heterozygosity expected under Hardy-Weinberg assumptions (He) were calculated as previously indicated. Clonal diversity (k) within populations was calculated for each population as k = G/N, where G is the number of different multilocus genotypes present in the sample and N is the sample size. Psex values were calculated with the GenClone 2.0 software for every multicopy genotype in each population. Thresholds for Psex values were estimated for each population from Monte Carlo simulations. Significant Psex values indicate that multicopy genotypes are statistically overrepresented in a population and, therefore, that they are probably the result of clonal amplification. FIS values were included as a measure of inbreeding.
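The summary statistics defined in this paragraph (clonal diversity k = G/N, observed Ho and unbiased expected He) can be illustrated with a short script. This is a minimal sketch for illustration only, not the STRUCTURE/DARwin/GenClone pipeline used in the study; the genotypes and MLG labels below are invented toy data.

```python
# Illustrative sketch (not the software pipeline used in the study): clonal
# diversity and per-locus heterozygosity from multilocus genotype (MLG) data.
from collections import Counter

def clonal_diversity(mlg_labels):
    """k = G/N, where G is the number of distinct MLGs and N is the sample size."""
    return len(set(mlg_labels)) / len(mlg_labels)

def heterozygosity(genotypes):
    """Observed Ho and unbiased expected He (Nei) for one SSR locus.

    genotypes: list of (allele1, allele2) tuples, one per individual."""
    n = len(genotypes)
    ho = sum(a != b for a, b in genotypes) / n
    allele_counts = Counter(a for g in genotypes for a in g)
    total = 2 * n
    he = (1.0 - sum((c / total) ** 2 for c in allele_counts.values())) * total / (total - 1)
    return ho, he

# Toy data: five individuals typed at one SSR locus, three distinct MLGs
locus = [(180, 184), (180, 180), (184, 186), (180, 184), (180, 184)]
mlgs = ["MLG1", "MLG1", "MLG2", "MLG3", "MLG1"]
print(heterozygosity(locus))   # (Ho, He) for the locus
print(clonal_diversity(mlgs))  # 3 distinct MLGs / 5 samples = 0.6 (< 1 suggests clonal amplification)
```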
Results
Genetic structure: European vs. native grape phylloxera range
To generate a grouping according to descent, the 335 European samples obtained in this work were combined with a set of 502 phylloxera genotypes (470 native American genotypes and 32 phylloxera genotypes collected from various habitats in the introduced regions of Argentina (2), Austria (7), Brazil (3), California (5), Hungary (8), Peru (3), Uruguay (4)), previously analyzed in Lund et al. (2017). As a result, a global dataset of 837 phylloxera samples was created containing genetic information at four SSR loci. STRUCTURE analysis and ΔK criteria clearly suggested the most probable existence of two genetic groups (k1 and k2) within the 837 phylloxera samples analyzed. The majority (717 samples) were associated with one of the two genetic groups (membership coefficient over 0.95), whereas 120 individuals were identified as 'admixed'. According to this clustering, 99.7% of the individuals assigned to k1 were from North America (native habitat) from Vitis species like V. arizonica, V. vulpina, V. cinerea or V. labrusca, whereas the other genetic group (k2) comprised the great majority of samples from the European commercial vineyards genotyped in this study (99.1%), samples from V. riparia host plants of the northeastern native range (sampled in Arizona, Indiana, Maine, New York, and Pennsylvania states), and most of the phylloxera genotypes sampled in diverse introduced regions (Fig. 2). This general grouping was confirmed by PCA results, where these two main genetic groups could be easily differentiated (Fig. 3).
Genetic diversity of European leaf-feeding grape phylloxera populations in commercial vineyards
After this general analysis, we focused on the detailed analysis of the 335 European grape phylloxera individuals obtained from commercial vineyards, which were genotyped by the use of seven SSR markers. Our aim was to analyze the genetic structure of these samples considering their geographical origin, host plant and feeding site (leaf/root), as shown in Table 1. In contrast to the strong stratification observed between native American and V. riparia + European samples, no population structure was observed when analyzing the European samples within the large range of the 15 hypothetical genetic groups tested, and no effect of geographical origin, feeding site (leaf/root) or host plant factors was identified. The latter result is consistent with those previously reported by Forneck et al. (2000) and Yvon and Peros (2003), who did not detect any significant effect of the host on phylloxera genotype grouping.
Genetic data revealed high genetic diversity, yet most parameters indicate clonal propagation of the analyzed populations (Table 2). The average number of alleles per locus in the Swiss and German phylloxera populations ranged from 3.00 (Tessin) to 5.13 (Zürich). The observed heterozygosity of each subset of samples (Ho) ranged from 0.45 to 0.58, while the expected heterozygosity values (He) ranged from 0.50 to 0.58. For 9 of the 11 sites, He was greater than Ho. Tests of the Hardy-Weinberg equilibrium showed the presence of an excess of heterozygosity (FIS < 0) in some of the populations. The FIS values observed in the commercial vineyard populations are less (negatively) consistent within populations than those previously reported in studies performed in 'semi-native habitats', including abandoned rootstock areas with extensive leaf-galling populations, research grapevine collections or rootstock nurseries (Vorwerk and Forneck 2007). MLGs were observed in each sampling region (Supplement Table 1), indicating anholocyclic reproduction in commercial vineyards on both leaves and roots. The probability of independently produced repeated genotypes by sexual reproduction was determined through the calculation of Psex values in MLG simulations, obtaining values generally considered as low (Table 2).
Fig. 2 Population structure of the 837 grape phylloxera samples included in this study based on STRUCTURE results. In A, every individual is shown as a vertical line, whose color indicates its origin: native range (yellow), Germany and Switzerland (brown) and introduced range (white). In B, individuals are also graphically represented by a vertical line, divided in colored segments according to the proportion of estimated membership in k1 (blue) and k2 (red). The optimal number of genetic groups (K = 2; k1 and k2) was established according to ΔK criteria. Accordingly, 305 and 412 samples are assigned to k1 and k2, respectively. Individuals from the native and introduced ranges were obtained from Lund et al. (2017).
Fig. 3 Principal component analysis of American and European grape phylloxera samples. The variance explained by the first two factors is indicated (%). Samples attributed to k1 are indicated in blue, and to k2 in red. Admixed individuals are indicated in gray.
The individuals sampled in the Zürich region formed the largest population in our work (N = 110), and its analysis provided arguments (positive FIS values, highest allele frequencies, lowest Psex values) to support the existence of sexual reproduction events.
MLGs of leaf- vs. root-feeding populations
To elucidate the origin of the leaf-feeding grape phylloxera populations, we performed a detailed population genetic analysis on the MLGs identified therein. Sampling-site-specific MLGs were detected in all the vineyards analyzed, and the list of the 28 MLGs with more than four repeats (out of five samples) is depicted in Supplement Table 1. Considering the whole set of individuals sampled (335), 40.3% (135 individuals) belonged to such MLGs.
To assess the existence of migrating MLGs among feeding habitats (leaves and roots) within vineyards, we focused on the analysis of samples from Zürich and Aargau, as these regions have been extensively studied (110 and 69 samples in Zürich and Aargau, respectively). MLGs 1, 2, 9 and 10 were found in both roots and leaves on grafted interspecific vines (Supplement Table 1). Other MLGs (15 and 16, or 4 and 5) were found to co-exist in the same vineyard, but they were found to feed separately on plant roots (MLGs 4 and 16) or leaves (MLGs 5 and 15). No migration was found between neighboring rootstock leaf habitats and either root- or leaf-feeding vineyards. For the first time, we report phylloxera MLGs migrating from root to leaf (or vice versa), with several examples in some Swiss commercial vineyards. For example, we found a commercial vineyard on Léon-Millot/unknown rootstock in Oberflachs (Aargau, sampling site AG40) with a high level of infestation (R = 18 vs. L = 5, where R and L indicate root and leaf samples, respectively) in which a phylloxera MLG (MLG13) was present in both roots (4 samples) and leaves (5 samples). Similarly, in a Maréchal Foch/125AA (R = 5, L = 5) vineyard in Regensberg (ZH45, Zürich) we found that MLG26 was present in roots (4 samples) and leaves (4 samples). A third MLG (MLG44) was found both in the leaves (4 samples) and roots (1 sample) of a Léon Millot/unknown rootstock vineyard from Regensberg (ZH44).
Discussion
The likely patterns of the introductions of phylloxera into worldwide viticulture regions from its native habitat have been extensively analyzed by different authors using diverse genetic markers, including mitochondrial (Downie 2002) and nuclear polymorphisms (Forneck et al. 2000; Lund et al. 2017; Riaz et al. 2017). Whereas phylloxera populations from Vitis species like V. vulpina and V. arizonica are suggested to be the more likely source of introductions into viticulture regions like California (USA), South Africa, Australia, New Zealand and South America (Arancibia et al. 2018; Downie 2002; Lund et al. 2017), there is a general agreement that all European phylloxera introductions are likely to come from northeastern American populations, where V. riparia dominates (Downie 2002). Our results confirm such findings, in which most of the European genotypes from Swiss and German commercial vineyards are grouped together with phylloxera samples from the northeastern native range from V. riparia plants.
Aphid populations are composed of a small number of high-frequency clones and many low-frequency (rare) genotypes (Harrison and Mondor 2011). In a recent report, Bao et al. (2014) evaluated phylloxera genetic diversity in Uruguay through the screening of 75 leaf-feeding phylloxera from thirteen different regions (including semi-native habitats, nurseries and commercial vineyards), which were genotyped at four SSR loci. Similarly, Forneck et al. (2015) evaluated the genetic diversity of 315 leaf-feeding D. vitifoliae samples from semi-natural habitats throughout Austrian viticulture regions. Both studies show high degrees of genetic diversity, and population genetic parameters showed the predominant occurrence of asexual reproduction. In addition, neither of these two works reports the existence of phylloxera 'superclones' (MLGs with an outstanding capacity to predominate in a specific region and persist over time). This was expected, since the habitats chosen for these works were either semi-natural habitats or a mixture of nurseries, commercial vineyards or natural habitats (Bao et al. 2014), and thus not comparable to the suggested 'superclone' habitat (Vorburger et al. 2003). Here as well, no dominating MLG has been found (Supplement Table 1). As a result, no phylloxera leaf-feeding 'superclone' candidate has been identified within the Swiss and German commercial vineyards analyzed. Migration of phylloxera (based on genotypes) between locations has rarely been shown in semi-natural habitats in South America, Asia and Europe (e.g. Bao et al. 2014; Forneck et al. 2015; Sun et al. 2009; Vorwerk and Forneck 2007), and our results confirm these findings. No MLGs were sampled in multiple vineyards or regions, with the exception of MLG7, which was found in vineyards of Quinten (Graubünden) and Malvaglia (Tessin). Further studies aimed at analyzing the local migration of phylloxera MLGs between adjacent plots by viticultural machinery and/or wind drift should be done to add evidence on short-distance dispersal mechanisms.
Previous population studies on D. vitifoliae in Europe showed two distinct genetic groups that correlated with their geographical location (northern and southern Europe), suggesting that selective forces could have favored the development of different phylloxera strains adapted to specific edapho-climatic conditions in northern and southern Europe (Forneck et al. 2000). The northern European group shows higher genetic diversity and shows similarity to phylloxera genotypes from northeastern native habitats (Downie 2002). On the other hand, the southern European group has not been clustered with phylloxera genotypes sampled from a specific habitat. Here, all the sampling was performed above the 43rd parallel, in regions often referred to as the northern group (Forneck et al. 2000). As a result, we did not find any significant subgrouping among the European samples from commercial vineyard habitats, possibly due to the homogeneity of the environmental conditions of the regions sampled. Nevertheless, additional bottlenecks coming from planting strategies and distribution of (symptomless but infested) plant material, as well as other, yet unknown, factors may have affected phylloxera population structure.
Early introductions, together with rare sexual events, viticultural practices (plantings) or human-mediated transportation, were responsible for the dissemination of phylloxera populations in the late nineteenth century in Europe, which led to the adoption of new cultural practices in commercial vineyards (grafting). In recent years, diverse anthropogenic pressures (including global climate change effects and vineyard management systems) have potentially caused an increase in the number of D. vitifoliae populations feeding on the leaves of grapevine scions. Here, we observed a high degree of genetic diversity within D. vitifoliae populations in commercial vineyards and we report that leaf-feeding D. vitifoliae in commercial vineyards can migrate from rootstock roots to scion leaves (and/or vice versa), establishing specific MLGs in each vineyard. Nonetheless, a dominating MLG (or 'superclone') has not been identified throughout the commercial vineyards from Switzerland and Germany analyzed in this work. No genetic structure was found within the European vineyards analyzed, and no conclusions regarding the impact of host plant (rootstock or scion), spatial range or elevation on phylloxera stratification could be made. This may be due to the limited set of samples, or the limited number of SSR markers used, which might have hampered the detection of effective alleles.
Mass Purification Protocol for Drosophila melanogaster Wing Imaginal Discs: An Alternative to Dissection to Obtain Large Numbers of Disc Cells
Simple Summary
Drosophila melanogaster, also known as the fruit fly, is a widely used model organism, especially for genetic studies or as a model for pathologies. The Drosophila genome is well characterized and many of its genes are conserved in humans, allowing biologists to obtain numerous mutant and transgenic fly lines. Gene function studies at the cellular and molecular levels are often performed using extracts of larval tissues. Due to their small size, it is difficult to dissect substantial amounts of these tissues for genomic or proteomic experiments. This paper develops a simple method to purify larval tissues en masse. This protocol preserves tissue integrity in the same way as manual dissection, is achievable by individual researchers and allows the purification of different samples simultaneously.
Abstract
Drosophila melanogaster imaginal discs are larval internal structures that become the external organs of the adult. They have been used to study numerous developmental processes for more than fifty years. Dissecting these imaginal discs for collection is challenging, as the size of third-instar larval organs is typically less than 1 mm. Certain experimental applications of the organs require many cells, which requires researchers to spend several hours dissecting them. This paper proposes an alternative to dissection in the form of a mass enrichment protocol. The protocol enables the recovery of many wing imaginal discs by grinding large quantities of third-instar larvae and separating the organs using filtration and a density gradient. The wing imaginal discs collected with this protocol in less than three hours are as well preserved as those collected by dissection. The dissociation and filtration of the extract allow the isolation of a large amount of wing imaginal disc cells.
Introduction
In Holometabola, imaginal discs are larval internal epithelial structures that are precursors of external organs in the adult, such as wings or legs. These structures are widely used in the model organism Drosophila melanogaster for developmental studies. The structures are accessible, and most genetic tools engineered for this species are designed to be used in these organs, including drivers for the UAS-GAL4 system [1] or constructs enabling mitotic clones [2], among others. Many genes and signaling cascades involved in developmental control are conserved between mammals and Drosophila melanogaster [3], making imaginal discs an interesting model to study processes involved in both development [4,5] and human diseases (see for example [6,7]). Such studies led to significant discoveries in recent decades, including homeotic genes [8] and the Hippo pathway [9]. However, although the Drosophila wing imaginal disc has proven to be a great model for genetic or cell biology approaches, its use is more difficult in biochemistry or molecular biology experiments that need large amounts of starting material. For instance, a chromatin immunoprecipitation (ChIP) requires 400 wing imaginal discs per sample [10], and this number can be even higher if the factor of interest has a low concentration or is expressed only in a subpopulation of the disc cells. Manual dissection is the classical way to collect imaginal discs. Thus, doing experiments such as a ChIP with the corresponding controls rapidly leads to the need for dissecting hundreds of larvae. Dissection is time-consuming and raises reproducibility issues, as several scientists may be involved. Subtle variations may occur throughout the dissection process, imaginal disc storage, or the crosslinking step. Molecular biology experiments thus necessitate an alternative method to isolate imaginal discs en masse, in a reproducible manner, and by a single experimenter in a limited amount of time.
Since imaginal discs such as wing ones are precursors of non-essential organs, they are instrumental in studying cell death as it is possible to induce cell death during the development in these structures without affecting the survival of the specimen [11]. Our team is interested in apoptosis and tissue homeostasis. Our favorite model organ is the wing imaginal disc [12,13]. However, as our projects necessitated a large number of discs, we decided to invest in developing a mass wing imaginal discs enrichment protocol. This protocol would allow us to harvest a significantly greater sample size of wing imaginal discs than previously achieved for larger-scale experiments. A few articles describe such protocols. The first two were published in the 1960 and 1970s [14,15] and offer mass isolation methods that allow the recovery of hundreds of imaginal discs using density gradients. Methods related to the large-scale collection of fruit fly larval tissues described during the 1960-1970s [14,15] cannot be easily reproduced due to the lack of documentation of methods and the unavailability of some tools. More recently, Marty et al. [16] presented a new version of this protocol, but their goal was only to roughly separate organs and not to obtain pure imaginal disc fractions. In this study, the authors used a Biosorter ® (Union Biometrica Inc., Holliston, MA, USA) to isolate wing imaginal discs according to their shape, size, or Green Fluorescent Protein (GFP) pattern. However, this device is expensive and not commonly available. Therefore, we wanted to develop a mass enrichment protocol routinely usable in any laboratory.
We propose here an imaginal disc mass enrichment protocol without manual dissection that allows the recovery of a large number of discs in an optimized time range, doable entirely by one experimenter, using density gradients and sedimentation; see the overall protocol in Figure 1. This method allows the experimenter to recover about 11.5% of the wing imaginal discs input within two to three hours in an enriched fraction (about 50%) that can be easily dissociated and filtrated to recover the wing imaginal disc cells. The protocol is optimized for wing imaginal discs, but its adaptation to other larvae organs such as eye-antenna imaginal discs or brains could be possible after different set-ups, especially the nature of the Ficoll gradient layers.
Figure 1. Overview of the mass purification protocol. L3 larvae obtained from synchronized egg-laying are collected en masse using a wash bottle. The larvae were initially ground using a GentleMACS™ (Miltenyi Biotec, Bergisch Gladbach, North Rhine-Westphalia, Germany). The material was then filtered through a series of strainers with decreasing mesh size, allowing the selection of elements between 100 µm and 200 µm. This material is resuspended in 10% (w/v) Ficoll and loaded on top of a 15:20:25% (w/v) Ficoll gradient. After centrifugation, wing imaginal discs (WID) are found at the 15:20% (w/v) interface along with some salivary gland (SG) pieces. After dissociation, filtering the cells through a 40 µm filter is enough to separate salivary gland cells from disc cells.
Fly Stocks
Flies were raised at 25 °C on a standard medium. The vg-GAL4 strain is a generous gift from Joel Silber (Institut Jacques Monod, Université Paris Cité, France). The UAS-mCD8-GFP was obtained from the Bloomington Drosophila Stock Center (BL-32185).
Protocol for Mass Enrichment of Wing Imaginal Discs
Third-instar larvae were collected by flushing the side of their rearing tubes. The larvae were then ground using the GentleMACS™ device (Miltenyi Biotec, Bergisch Gladbach, Germany) and filtered through a series of sieves. The resulting material was loaded in 10% Ficoll solution (w/v) on top of a 15:20:25% gradient. After centrifugation, the 15:20% interface containing the enriched wing imaginal discs was collected and rehydrated in Ringer 1X (Supplementary File S1). After dissociation, filtration on 40 µm retained the cells of the salivary glands and allowed the recovery of only imaginal disc cells.
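For readers reproducing the gradient, the layer composition reduces to simple C1·V1 = C2·V2 dilution arithmetic. The sketch below is illustrative only: the 30% (w/v) Ficoll stock concentration and the 2 mL per-layer volume are assumptions, not values taken from the protocol.

```python
# Dilution arithmetic for the Ficoll layers (C1*V1 = C2*V2).
# Assumed values for illustration: 30% (w/v) Ficoll stock, 2 mL per layer.
def ficoll_layer(target_pct, layer_ml=2.0, stock_pct=30.0):
    """Return (mL of Ficoll stock, mL of Ringer 1X) for one gradient layer."""
    stock_ml = target_pct * layer_ml / stock_pct
    return stock_ml, layer_ml - stock_ml

for pct in (10, 15, 20, 25):
    stock_ml, ringer_ml = ficoll_layer(pct)
    print(f"{pct}% layer: {stock_ml:.2f} mL stock + {ringer_ml:.2f} mL Ringer 1X")
```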
Immunostaining and Images Acquisition
For the "dissected" condition, wing imaginal discs were dissected from third-instar larvae in 1X Phosphate Buffered Saline (PBS), pH 7.6. Discs obtained by mass enrichment or dissection were fixed with 3.7% formaldehyde in 1X PBS for 20 min at room temperature and washed three times for 10 min in PBST (1X PBS, 0.3% Triton X-100).
Apoptosis was analyzed as previously described in de Noiron et al. [17]. In brief, the discs were blocked for 1 h in PBST-BSA (1X PBS, 0.3% Tween 20, 2% Bovine Serum Albumin) and incubated overnight with a 1:100 dilution of anti-cleaved Drosophila Dcp-1 (Asp216, Cell Signaling Technology) at 4 °C. The following day, after 3 washes in PBST, wing discs were incubated for two hours with Alexa-568-coupled anti-rabbit secondary antibody (A-11011, Invitrogen) diluted to 1:400 in PBST. Finally, wing discs were mounted in ProLong Diamond (Invitrogen, Waltham, MA, USA), and images were acquired using a Leica SP8 confocal microscope (Leica Camera, Wetzlar, Germany) at 568 nm. At least 30 wing imaginal discs were analyzed for each condition. Image analysis was performed on Fiji with the macro described in de Noiron et al. [17], whose main steps are median filter application, stacking (Z-project), threshold determination, and signal quantification.
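The quantification steps listed above (median filtering, Z-projection, thresholding and signal measurement) can be approximated outside Fiji. The sketch below is an illustrative Python re-implementation, not the published macro; the Otsu threshold, the filter radius and the file name are assumptions.

```python
# Rough re-implementation of the Dcp-1 quantification steps (illustrative only):
# median filter per slice, maximum-intensity Z-projection, automatic threshold,
# then measurement of the stained area and integrated signal.
import numpy as np
from skimage import io, filters
from skimage.morphology import disk

def quantify_dcp1(stack_path):
    stack = io.imread(stack_path)                                        # (z, y, x) confocal stack
    smoothed = np.stack([filters.median(sl, disk(2)) for sl in stack])   # median filter, slice by slice
    projected = smoothed.max(axis=0)                                     # stacking (max-intensity Z-project)
    threshold = filters.threshold_otsu(projected)                        # threshold determination
    mask = projected > threshold
    return mask.sum(), projected[mask].sum()                             # stained area (px), integrated signal

area_px, signal = quantify_dcp1("wing_disc_dcp1_stack.tif")              # placeholder file name
print(f"Dcp-1 positive area: {area_px} px, integrated intensity: {signal}")
```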
Flow Cytometry Analysis
Third-instar larvae expressing UAS-mCD8-GFP under the control of the vestigial-GAL4 driver were used, as described in [18]. Wing imaginal discs recovered by either dissection or the mass enrichment protocol were dissociated using a 1:50 dilution (final concentration 0.1% w/v) of a 5% (w/v) protease stock (Sigma-Aldrich P8811) in 1X Ringer and incubated for 20 min at 25 °C under gentle shaking (300 rpm). Dissociation was completed by gently passing the cells through a P1000 tip. After pelleting, cells were resuspended in 1X Ringer and filtered through a 40 µm sieve to retain salivary gland cells and eventual debris. Cells were then analyzed using the BD FACSAria™ III (BD Biosciences, Franklin Lakes, NJ, USA) equipped with a 488 nm laser line and the BD FACSDiva™ software (9.0.1, BD Biosciences, Franklin Lakes, NJ, USA). Cells were selected through forward scatter (FSC) to keep out debris and cell clusters. They were then sorted based on whether or not they expressed GFP.
For the cell death assessment by viability dye, 10 third-instar larvae expressing UAS-mCD8-GFP under the control of the vestigial-GAL4 driver were used for each group. Wing imaginal discs recovered by either dissection in 1X PBS, dissection in 1X Ringer, or the mass enrichment protocol in 1X Ringer were dissociated using a 0.1% (w/v) solution of protease (Sigma-Aldrich P8811) in 1X PBS or 1X Ringer and incubated for 20 min at 25 °C under gentle shaking (300 rpm). Dissociation was completed by gently passing the cells through a P1000 tip. After pelleting, the cells were incubated for 3 min on ice in viability dye solution (Fixable Viability Dye eFluor™ 780, eBioscience™, Thermo Fisher Scientific, Waltham, MA, USA) diluted to 1:1000 in 1X PBS or 1X Ringer. After a wash in 1X PBS or 1X Ringer, cells were resuspended in 1X PBS or 1X Ringer and filtered through a 40 µm sieve. The fluorescence of the dye was then measured using the BD FACSAria™ III equipped with a 635 nm laser line and the BD FACSDiva™ software. Cells were selected through FSC, excluding debris and cell clusters. Data analysis was performed with FlowJo software (version 10.8.1, FlowJo, Ashland, OR, USA). Graphs and statistical analysis were performed using RStudio (ggplot2 and ggpubr packages, RStudio, Boston, MA, USA) with a significance threshold of 5%.
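The downstream numbers (death rate per replicate, then a one-way ANOVA with Bonferroni correction at a 5% threshold, as reported for Figure 3) can be sketched as below. This is an illustrative Python outline, not the FlowJo/RStudio workflow used by the authors; the FSC and dye cut-offs and the per-replicate death rates are hypothetical placeholders.

```python
# Illustrative outline of the analysis: gate out debris on FSC, compute the
# fraction of viability-dye-positive (damaged) cells, then compare groups by
# one-way ANOVA with Bonferroni-corrected pairwise tests (alpha = 0.05).
import numpy as np
from scipy import stats

def death_rate(fsc, dye, fsc_min=20_000, dye_cutoff=1_000):
    gated = fsc > fsc_min                        # exclude debris / small events
    return float(np.mean(dye[gated] > dye_cutoff))

rng = np.random.default_rng(0)
fsc = rng.normal(50_000, 15_000, 5_000)          # simulated per-event FSC values
dye = rng.lognormal(6.0, 1.0, 5_000)             # simulated dye intensities
print(f"death rate in simulated sample: {death_rate(fsc, dye):.2%}")

# One value per replicate (three independent experiments per condition, as in Figure 3)
pbs_dissection    = [0.22, 0.25, 0.20]           # hypothetical death rates
ringer_dissection = [0.10, 0.09, 0.12]
ringer_mass_prep  = [0.11, 0.10, 0.13]
f_stat, p_anova = stats.f_oneway(pbs_dissection, ringer_dissection, ringer_mass_prep)
pairs = [(pbs_dissection, ringer_dissection),
         (pbs_dissection, ringer_mass_prep),
         (ringer_dissection, ringer_mass_prep)]
p_bonferroni = [min(1.0, stats.ttest_ind(a, b).pvalue * len(pairs)) for a, b in pairs]
print(p_anova, p_bonferroni)
```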
Grinding and Buffer Parameters to Release Internal Organs without Damaging Them
Few protocols of mass enrichment for imaginal discs have previously been described. They are all based on the protocol developed by Fristrom and Mitchell [14], but their use needs some modernizing. This protocol requires dismantling larvae to release internal organs later isolated by differential sedimentation. As we mainly work on wing imaginal discs, we focused on this organ and chose to monitor its enrichment process with the help of fluorescent wing discs. To this end, we used larvae expressing GFP under the control of the vestigial driver, a driver allowing expression in a wide band at the dorso-ventral frontier of wing and haltere discs. Like many other wing imaginal disc drivers [1,19], it also displays leaky expression in the salivary glands, which is not an issue as salivary glands are easily distinguishable from wing imaginal discs.
Grinding Larvae: A Fine-Tuning between Breaking the Cuticle and Keeping the Organs Undamaged
The first step of this protocol consists of dismantling the larvae without damaging the imaginal discs. Protocols from Mitchell and Cohen laboratories used either a meat grinder that is in all likelihood not sold anymore [14] or a custom-made device specifically developed in this laboratory for this particular usage [15]. Our goal was to use a commercial tissue dissociator to ensure the reproducibility of the larvae grinding. Thus, we tried two devices. The first was a bead beater like the ones usually available in most laboratories, in our case, the Precellys ® (Bertin Technologies, Montigny-le-Bretonneux, France) from Bertin. The second was the GentleMACS™ from Miltenyi, which was successfully used for such applications by the Basler lab [16]. We expected it to perform a gentle grinding, as it was developed for tissue dissociation rather than whole lysis. As these devices are usually dedicated to mammalian tissue lysis or dissociation, their ability to grind Drosophila larvae is not documented. The grinding process was visually monitored by assessing the integrity and location (released from the larvae or not) of fluorescent discs at the end of the process.
The development tests were mainly carried out using 2.5 mL of larvae, representing around 600 individuals. This amount of larvae is collected by flushing the side of six vials. Adult flies were transferred to a fresh medium every 24 h to ensure that all offspring were the same age and obtained in sufficient quantity. We used 12 vials containing 15 virgin females and 6 males to collect the same amount (2.5 mL) of larvae from a cross where the larvae of interest represented only 50% of the progeny. These values must be adapted according to the viability of the individuals of interest.
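The vial numbers quoted above follow directly from the fraction of usable progeny. A quick sanity check, assuming the roughly 100 larvae per vial implied by the six-vial / 600-larvae figures in the text:

```python
# Scaling the number of egg-laying vials with the fraction of larvae of interest.
# larvae_per_vial (~100) is derived from the 600 larvae / 6 vials figures above.
import math

def vials_needed(target_larvae=600, larvae_per_vial=100, fraction_of_interest=1.0):
    return math.ceil(target_larvae / (larvae_per_vial * fraction_of_interest))

print(vials_needed(fraction_of_interest=1.0))   # 6 vials when all progeny are usable
print(vials_needed(fraction_of_interest=0.5))   # 12 vials when only 50% of progeny are of interest
```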
For the assays with the Precellys®, we used 2 mL tubes containing either 2.8 mm diameter beads or a mix of 1.4 and 2.8 mm beads with different grinding cycles. We found that 2.8 mm beads alone were more efficient in breaking cuticles and that the recovered imaginal discs were in better condition than when using the bead mix. However, the yield was low since most discs remained attached to the larvae (data not shown). Furthermore, considering the quick mass enrichment protocol we wanted to set up, we did not further pursue the set-up with this device as it can only handle a limited number of larvae (100-120/tube) at once, and the recovery of the material after grinding was challenging; these drawbacks are incompatible with mass enrichment.
The GentleMACS™ uses tubes with spinning helix-like structures and comes with several pre-registered programs of different speeds and/or duration. Two types of tubes are available; the C tubes offer a gentler grinding and are dedicated to tissue dissociation into individual cells, whereas M tubes offer stronger grinding and are recommended for complete cell lysis as required for biomolecule extraction. We first used the "liver 1" programs (48 rotations per round, 15 s) that were previously used for such purpose [16] or "liver 2" programs (78 rpr, 24 s) with the two kinds of tubes (Supplementary Figure S1). In our hands, these programs were too gentle to detach the imaginal discs efficiently, and upon advice from the technical support from Miltenyi Biotec, we tested the "brain1" programs (116 rpr, 36 s) or "brain" 2 programs (100 rpr, 30 s), the second of which turned out to be much more appropriate. We observed that with the M tubes, larvae were ground either not enough or too much, resulting in damaged imaginal discs. By contrast, the C tubes allowed a dismantling of the larvae that released intact imaginal discs (Supplementary Figure S1). We used the GentleMACS™ with the "brain 2" program and the C tubes based on these tests. This grinding step is repeated five times until most, if not all, internal organs are released from the larvae.
Filtration and Ficoll Gradient: Steps to Separate Organs of Interest from the Others
The second step of the protocol consists of isolating imaginal discs from the other organs. As per the protocol from Marty [16], we first loaded the ground material, without further treatment, directly on a Ficoll gradient. However, this was not satisfying, since it led to gradient saturation. We then added a step of rough separation of the organs based on their size. To this end, we used a series of filters with different meshing. We first tried a simple filtration step with only a 100 µm strainer that kept wing imaginal discs and let us eliminate little pieces of organs (such as gut pieces). However, the ground fractions also contain many contaminants of larger size and/or density, such as pieces of cuticles or mouth hooks. Given the amount of material loaded on the filter, it was still rapidly clogged with these tissues. We thus decided to add a 500 µm filter before the 100 µm one to eliminate large structures, such as cuticles or tiny unground larvae, but the output of this 500 µm filter was still not clean enough. After numerous tries with different strainer sizes, we combined 500 µm, 300 µm, and 200 µm strainers in serial filtration (Figure 2). This filter combination retains all empty cuticles, larvae that are not fully ground, and most mouth hooks while letting imaginal discs go through. This filtrate then undergoes 100 µm filtration that retains imaginal discs but allows small-sized contaminants such as fat body and small gut pieces to be removed. The material recovered from the 100 µm strainer contains imaginal discs and structures with the same size range, such as brains, proventriculus, "string-like" structures such as salivary glands and gut pieces, and some fat body pieces.
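The logic of the serial filtration can be summarized as a simple size classifier: each strainer retains everything larger than its mesh, so only elements between roughly 100 µm and 200 µm reach the 100 µm strainer. The toy sketch below is for illustration only; the element sizes are rough assumptions, not measurements from the study.

```python
# Toy illustration of the serial filtration cascade (500 -> 300 -> 200 -> 100 µm).
def filtration_fate(size_um, meshes=(500, 300, 200, 100)):
    for mesh in meshes:
        if size_um > mesh:
            return f"retained on the {mesh} µm strainer"
    return "passes all strainers (lost in the flow-through)"

# Hypothetical sizes, for illustration only
for item, size in [("empty cuticle", 1200), ("mouth hook", 350),
                   ("wing imaginal disc", 150), ("fat body fragment", 60)]:
    print(f"{item:20s} -> {filtration_fate(size)}")
```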
Still, to avoid gradient saturation, we added a sedimentation step. Indeed, this was a means to further improve the purity as gut and fat body pieces are much less dense than wing imaginal discs. To assess the efficiency of the sedimentation and purification steps, we needed a way to visually localize wing imaginal discs in the tubes. To do so, we took advantage of LacZ expression in those discs and added a fixation-less staining step of the beta-galactosidase [20] (Supplementary Figure S2). Finally, sedimentation with low-speed centrifugation allowed cleaning out this fraction.
Once the input for the Ficoll gradient separation was cleared from most unwanted larval elements, we worked on the enrichment step. We tried many gradient configurations (number, concentration, and volume of layers). We present here only a subset of them representing the milestones of the development of the method (Figure 2). The most recently published protocol uses a Ficoll gradient prepared with PBS, which allows the recovery of the imaginal discs at the 16:25% interface [16]. The oldest method comprises two gradients, the first placing imaginal discs at the 14:19% interface of a Ringer-prepared Ficoll gradient and the second allowing further separation of this fraction using a continuous 14-24% gradient [14]. The latter should provide the purest imaginal disc fractions, whereas the most recent gives a much less pure preparation, but this was not a problem for the authors since they further sorted the enriched discs using a BioSorter. We wanted our method to stand in between, being the easiest and shortest possible but still favoring purity over yield. We then decided to improve the quality of the output fraction using a single gradient. We started with PBS-prepared Ficoll gradients, as used in the single gradient protocol. As in previous experiments, discs were found in the interphase in the 14-25% range. We expected a disc enrichment in the 15:20% interface, with 15 and 20% layers becoming the basic set-up of our method. We soon noticed that adding a layer with more diluted Ficoll (i.e., 10%) improved the separation. Adding another layer of 25% Ficoll at the bottom of the gradient further improved the enrichment of imaginal discs. Our final protocol corresponds to the "E" gradient in Figure 2A. It comprises a Ficoll gradient with a first 10% layer in which the ground material is resuspended. This layer is then loaded on top of a 15:20:25% gradient. Using filter tips coated with fat larvae tissue prevents the loss of wing imaginal discs sticking to the tips. This improvement and precise pipetting at the interphase of the gradient significantly increased the protocol yield compared to the one in Figure 2A "E". Indeed, in six enrichment experiments from 2.5 mL of third-instar larvae (about 600 larvae), we obtained an average of 138 wing imaginal discs per preparation, corresponding to a yield of 11.5%. After centrifugation, wing imaginal discs, along with eye and antenna discs and salivary gland pieces, were found at the 15:20% interface. Some more wing imaginal discs could be recovered at the 20:25% interface, but we chose not to recover them because the interface also contains the vast majority of salivary glands and mouth hooks. For our applications, cells from imaginal discs were easy to separate from salivary gland cells, which are the primary contaminants in this case.
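The 11.5% figure can be checked with one line of arithmetic, assuming two wing imaginal discs per larva (one per side):

```python
# Worked example of the quoted yield, assuming two wing imaginal discs per larva.
larvae = 600             # ~2.5 mL of third-instar larvae
discs_input = 2 * larvae
discs_recovered = 138    # average over six enrichment experiments
print(f"yield = {discs_recovered / discs_input:.1%}")   # ~11.5%
```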
Finding the Best Buffer for Isolating and Maintaining Intact Imaginal Discs
An important difference between the protocols of Marty et al. 2014 [16] and Fristrom and Mitchell 1965 [14] is the buffer used throughout the enrichment process. The original article by Fristrom and Mitchell [14] used a Ringer solution [21], whereas the most recently used solution was PBS. We started setting up the protocol using PBS, the cheapest and most commonly used buffer. However, after Ficoll gradient enrichment, the recovered discs looked dehydrated (data not shown). Since discs recovered in Fristrom and Mitchell's article [14], for which Ringer solution was used, could be successfully transplanted, we considered switching to this buffer. After replacing PBS with Ringer solution during manual dissections, we observed that discs had a better appearance. We performed a viability assay using a dye that enters damaged cells with a permeabilized membrane to assess whether imaginal disc cells were indeed in more physiological conditions in a Ringer solution than in PBS.
We used this assay to determine whether one of the buffers, PBS or Ringer, is more damaging than the other. Confirming our observation, PBS is a much more damaging agent as cells obtained by dissection in PBS were more than twice as frequently labeled compared to cells treated in Ringer solution ( Figure 3D). In addition, it could be envisioned that the mass enrichment process may be more damaging than dissection. Grinding could exert more mechanical stress than dissection, while using the Ficoll polymer may induce osmotic stress. The analysis of the impact of the recovery method showed that the mass enrichment process is not more damaging for the cells than dissection in this assay (Figure 3, compare A and B and see D). To further ensure that the mass enrichment process did not induce ectopic apoptosis, we performed apoptotic cell labeling using an antibody against activated caspases (anti-cleaved Dcp-1) [22]. This staining showed that mass enrichment does not induce more apoptosis than classical manual dissection (Supplementary Figure S3). The slight difference between those two conditions could be explained by the difference in the buffer used (PBS for dissection, Ringer for grinding). Finally, switching PBS to the Ringer solution improved the quality of the material recovered and the enrichment process (compare Figure 2A protocol "C" with Figure 2A protocol "D" and "E") as it decreased the number of contaminants while increasing the number of discs recovered.
Figure 3. The experiment was performed three times, and each point on the dot plot represents the death rate for one experiment. The triangular point represents the mean for each data set. Statistical significance was determined by one-way ANOVA using Bonferroni's correction for multiple comparison testing.
We continued the quality control of the material recovered by mass enrichment by analyzing the number of GFP-positive cells using flow cytometry. This analysis was carried out on larvae expressing GFP in the wing disc under the control of the vg-GAL4 driver. This driver induces GFP expression in a wide band at the dorso-ventral frontier, representing roughly 40% of the cells. Visual inspection of the GFP pattern readily allows the recognition of the vg pattern, confirming that the discs do not undergo such stress that would lead to GFP extinction. Since salivary gland cells are bigger than wing imaginal disc cells [23], it is easy to discard them: after dissociation, passing the cell suspension through a 40 µm filter is enough to retain the bigger cells. After dissociation, filtration, and flow cytometry analysis of either mass-enriched or dissected wing imaginal discs, the percentage of GFP-positive cells was measured (Figure 4). Mass enrichment of discs led to a slight decrease in the proportion of GFP-positive cells compared with dissected discs (from 47.7% to 42.7%), but this decrease was expected, as ground samples were not 100% pure. This fraction contains other organs (such as other discs) whose cells do not express GFP, thus increasing the GFP-negative fraction. This limited decrease indicates that the wing imaginal disc fraction recovered by mass enrichment is relatively pure. This result confirms our visual assessment and is supported by the fact that, whatever the protocol used, the percentage of cells recovered after dissociation and filtration relative to the total number of events counted by cytometry is not significantly different (Figure 4).
The tissue structure was also explored using fluorescent staining of vg-GAL4, mCD8-GFP wing imaginal discs to assess the quality of the wing imaginal discs recovered by the mass enrichment protocol. Hoechst and phalloidin-ATTO 655 stained the nucleus and the plasma membrane, respectively. The mCD8-GFP protein is known to localize at the plasma membrane. The experiment was performed in parallel on wing discs isolated by dissection or by the mass enrichment protocol. The results in Figure 5 show that the wing disc cells obtained by the mass enrichment protocol have a normal structure. The slight differences in intensity or morphology observed for staining within a single image (e.g., Figure 5C) or between images obtained under the two conditions (dissection or mass enrichment protocol) can be explained by the fact that the imaginal disc is a folded pseudo-columnar epithelium; thus, the image plane is not at the same apical-basal level for all cells. The results do not reveal any significant difference in the quality of the discs recovered by the two methods.

Figure 5. Cell structure in wing imaginal discs recovered by dissection or mass-enrichment protocol. Cell structure was assessed in wing imaginal discs recovered by dissection (left column) or mass-enrichment protocol ("grinding", right column). Confocal images were acquired with a 10X objective, and only a slice of this pseudocolumnar epithelium is shown (A,A'). Images (B-E') correspond to the same wing imaginal discs and were acquired with a 63X objective. They show a zoomed-in view of the white boxes in (A,A'). Nuclei were visualized using Hoechst (B,B'), and the plasma membrane using phalloidin-ATTO 655 (C,C'). GFP is coupled to the mCD8 protein, whose expression is driven by the vg-GAL4 driver (D,D'). Representative overlays of nuclei staining (blue) and GFP (green) are shown in whole discs (A,A') or in an enlargement of the wing imaginal disc posterior zone. Images (E) and (E') are overlays of (B,D) and (B',D'), respectively. Scale bars correspond to 100 µm (A,A') or 10 µm (B-E').
Final Protocol (See Supplementary File S1 for Details)
Flushing the sides of the tubes with water allows the recovery of L3 larvae from synchronized egg-laying. After rinsing in water to remove the growth medium, larvae are transferred into GentleMACS C tubes and ground in Ringer solution. The ground material is filtered through several strainers of decreasing mesh to separate organs by size; this mainly removes cuticles and fat bodies. These steps are repeated five times to ensure good separation of the organs without damaging them. The filtered organ suspension (100-200 µm) then undergoes a sedimentation step to remove gut pieces, fat body, and some other non-disc organs. Finally, a Ficoll density gradient allows the separation of imaginal discs from other organs. Salivary gland pieces constitute the primary contaminant (Figures 1 and 6). Wing imaginal cells can be isolated and separated from salivary gland cells using dissociation and filtration steps.
Discussion
The increasing use of global approaches to study gene expression or protein regulation justifies the development of mass purification techniques for Drosophila imaginal discs. By studying protocols from the 1960s-1970s, we set up a protocol that allows the recovery of third-instar larval wing imaginal discs without manual dissection. To do so, we had to set up every step, from the grinding of larvae to the density gradient (Figure 2 and Supplementary Figure S1). During the protocol set-up, a compromise was made between purity and quantity. The selected protocol therefore yields fractions consisting of 50% wing imaginal discs (Figure 2). There are two types of contaminants: salivary glands, the most abundant ones, and rare other imaginal discs (mainly eye-antenna discs). However, if pure fractions are needed, wing imaginal discs can easily be picked out with forceps. Moreover, if the objective is to recover the cellular fraction corresponding to the discs, a dissociation step followed by filtration allows removing the cells of the salivary glands.
Indeed, since they are much bigger than those of imaginal discs, passing the solution through a 40 µm strainer is enough to separate them from the wing imaginal disc cells of interest. The advantages and disadvantages of the protocol presented in this article are summarized in Supplementary Figures S4 and S5, which compare the pros and cons of some existing protocols.
The protocol we selected has a yield of 11.5%, which is low compared to the manual dissection yield, estimated at around 85% when taking into account the organs that might be lost or damaged during the process. However, this mass enrichment protocol presents several advantages over manual dissection.
(i) We used around 600 larvae as input to obtain, on average, 140 wing imaginal discs. Depending on the researcher, this amount can be dissected in about two to three hours. However, if more wing imaginal discs are needed, the dissection time increases proportionally. One advantage of the mass enrichment protocol is that several samples can be processed simultaneously by using parallel filtration montages and performing the filtration and centrifugations (sedimentation and Ficoll density gradient) at the same time (Figure 1). Its duration is therefore only slightly affected and should not exceed three hours. (ii) When large amounts of organs are needed, multiple researchers usually dissect them together. As each researcher has their own method and ease of dissection, which may be influenced by the time spent dissecting, this may pose reproducibility problems. The enrichment protocol is standardized and can be performed entirely by one researcher, reducing reproducibility issues, which constitutes another advantage over manual dissection. (iii) Since dissection allows only one sample to be processed at a time, it induces a delay in the treatment of each condition. Our protocol does not present this disadvantage, as several samples can be treated almost simultaneously, allowing a more homogeneous treatment between conditions.
As mentioned in Section 3.1, this protocol requires upstream preparation: for all individuals to be at the same stage, we recommend transferring the adults to fresh tubes every 24 h. As a reference, we obtain 2.5 mL of larvae (around 600 L3 larvae) from six vials of stable stock lines. When large amounts of material or numerous conditions are needed, these preparation steps can be facilitated by using larger vials or even bottles. The number of vials necessary depends on offspring viability and is left to the researcher's judgment. While the amount of work necessary to produce enough larvae seems large compared to that required for dissection, this technique allows the larvae to be harvested simultaneously, avoiding the treatment delay between each larva. Pooling all the larvae from a larger number of tubes also decreases the batch effect of each vial.
If larvae of interest are obtained by crossing and represent only 50% of the progeny, the number of vials must be doubled to obtain an equal amount of wing imaginal discs. Depending on the parents' genotypes, only a fraction of the offspring may be of interest in certain conditions. Collecting these individuals based on their specific phenotypes (Tb, CyO-GFP, etc.) is possible, but this would add a time-consuming step. In this case, using a fluorescent protein expressed in the population of interest presents two advantages. Firstly, it allows the identification of discs of interest at the end of the mass enrichment without selecting the larvae before starting the protocol. Secondly, when a large amount of discs is required to isolate cells of interest, the dissociation and filtration step allows obtaining wing imaginal cells that can be sorted by flow cytometry. Depending on the experimental design, other solutions can be considered, such as tagged proteins of interest and magnetic cell sorting [24].
Interestingly, this protocol can also be adapted to isolate other Drosophila organs by modifying the filtration or Ficoll gradient steps following the indications detailed above. Moreover, to facilitate the tracking of organs throughout the gradient, we adapted a fixation-free beta-galactosidase staining protocol to color the organs blue and provide a visual clue to their localization [20] (Supplementary Figure S2). This staining can be applied to help isolate other organs.
Older protocols were able to graft the collected organs, which developed into normal adult structures [14]. We did not test this ability. However, using a cell viability assay (Figure 3) and controlling the apoptosis level (Supplementary Figure S3), we showed that this protocol does not induce more damage than the dissection of wing imaginal discs. The mass enrichment protocol is standardized and quick to carry out, allowing a more homogenous treatment of wing imaginal discs. For experiments that necessitate the analysis of a large number of imaginal discs, this constitutes a considerable advantage of a mass purification protocol. Remarkably, our analysis of cell viability strongly emphasizes the importance of the cell buffer (Figure 3) and shows that cell survival is higher in Ringer buffer than in PBS, even though the latter is commonly used for dissections.
As another quality control, we compared the expression of GFP in wing imaginal discs in which the vestigial driver drove GFP expression. Visual examination showed no notable difference in GFP levels between discs (see Figures 5 and 6 for GFP-expressing wing imaginal discs recovered through the mass enrichment protocol). The percentages of GFP-positive cells estimated by flow cytometry for discs recovered by the mass enrichment protocol and by dissection are quite similar. The slight observed difference can easily be explained by the presence of a small number of other organs, such as eye-antennae discs, which "dilute" the GFP-positive cells when using the mass enrichment protocol. Nevertheless, the GFP analysis shows that the obtained discs are homogenous and that the purification protocol is reproducible. In many approaches, the cells of interest represent only a subpopulation of the cells in the imaginal disc. In this case, obtaining a sufficient number of cells is difficult to achieve by manual dissection. One of the interests of this approach is that it allows a large quantity of a subpopulation of imaginal disc cells to be rapidly obtained by coupling the mass enrichment protocol to cell sorting by flow cytometry.
Conclusions
In summary, this paper shows that this mass enrichment protocol allows the rapid purification of a large quantity of wing imaginal discs, of quality equivalent to manually dissected discs, for multiple samples simultaneously and by a single investigator. Wing imaginal discs recovered by mass enrichment allow the isolation of wing imaginal disc cells suitable for numerous applications such as cytometry analyses (Figures 3 and 4), transcriptomics, and proteomics.
Supplementary Materials: The following supporting information can be downloaded at https://www.mdpi.com/article/10.3390/biology11101384/s1. Figure S1: Overview of the GentleMACS™ programs tested; Figure S2: Example of a Ficoll gradient with β-galactosidase-stained wing imaginal discs; Figure S3: Apoptosis staining in the wing discs according to the disc enrichment method; Figure S4: Comparison of wing imaginal disc isolation protocols; Figure S5: Summary table of pros and cons of available imaginal disc isolation protocols; File S1: Mass enrichment protocol.
Data Availability Statement:
The data that support the findings of this study are available from the corresponding authors, I.G. and J.C., upon reasonable request.
Anomalous circularly polarized light emission caused by the chirality-driven topological electronic properties
Chirality of organic molecules is characterized by the selective absorption and emission of circularly polarized light (CPL). A consensus for chiral emission (absorption) is that the molecular chirality determines the favored light handedness regardless of the light-emitting (incident) direction. Challenging this textbook knowledge, we discover an unconventional CPL emission effect in organic light-emitting diodes (OLEDs), where oppositely propagating CPLs exhibit opposite handedness. This direction-dependent CPL emission boosts the net polarization rate by orders of magnitude in OLED devices by resolving the long-lasting back-electrode reflection problem. The anomalous CPL emission originates in a ubiquitous topological electronic property of chiral materials, i.e., the orbital-momentum locking. Our work paves the way to design novel chiroptoelectronic devices and reveals that chiral materials, topological electrons, and CPL have intimate connections in the quantum regime.
Introduction
Chirality characterizes parity symmetry-breaking, where a molecule cannot be superposed on its mirror image, in chemistry and biology 1,2. Chiral enantiomers exhibit opposite chiroptical activity when coupling to light 3,4. In physics, chirality usually refers to the spin-momentum locking of particles such as Weyl fermions 5,6, and to circularly polarized light. Chiral organics were recently reported to exhibit a topological feature 7, in which the electronic orbital and momentum are locked together, rationalizing the intriguing spin selectivity in DNA-type molecules 8,9. Hence, given the intimate relationship between electronic states and light-matter interactions, we were inspired to raise a question: Can topological electronic properties enhance chiroptical activity and therefore advance the rapidly developing (chir)optoelectronic technology 10,11?
A future industrial application of organic chiral emissive materials is in circularly polarized organic light-emitting diodes (CP-OLEDs) 12, which should eliminate the ∼50% internal light loss caused by the contrast-enhancing circular polarizer in OLED displays. Such efficiency gains occur via direct circularly polarized electroluminescence (CP-EL) from the CP-OLED, which can pass through the contrast-enhancing polarizer unhindered 13. The effectiveness of this strategy depends on the degree of circular polarization of the EL, where higher polarization gives better efficiency for the display in the presence of such polarizers 14. Since the first CP-OLED reported in 1997 15, the chirality of a material in a device has been assumed to be identical to its chirality measured optically in a thin film (i.e., without current flows).
In other words, CP-EL was considered nearly the same process as CP photoluminescence (CP-PL) [or the inverse process of optical circular dichroism (CD)] due to the shared electronic transition, and its CP emission is determined by the product of the electric and magnetic transition dipole moments 16,17. Thus, most efforts in this field were devoted to developing more twisted chiral emitters with stronger magnetic transition dipoles to improve optical chirality 18,19, without taking current flows in an OLED device into consideration.
More importantly, in terms of device engineering, the reflective back-electrode is another key issue. In all prior studies of chiral emissive materials, CP emission is conventionally expected to exhibit the same handedness in both emission directions (forward and back) from the point of recombination; thus any back reflection within the device inverts the handedness of the CP emission travelling backwards and cancels out the forward CP emission, reducing the net EL circular polarization that exits the device through the transparent electrode 18,20-22. Consequently, the magnitude of EL circular polarization from devices becomes much smaller than the corresponding CP-PL, which does not suffer from reflection issues 20 (Figure 1a). Even though constructing semitransparent OLEDs can, to some extent, mitigate the reflection problem, such a strategy reduces the overall device performance in a display, negating the original intention of energy saving at the polarizer 18. Among all reported CP-OLEDs, chiral polymeric materials 14,23-26 demonstrate significant circular polarization in PL and EL, several orders of magnitude stronger than other chiral emissive systems 18,27-29 (see Figure 2a). Despite the analysis above, when optoelectronic devices are constructed from such materials, their CP-EL remains equal to, or is sometimes even enhanced compared to, their CP-PL or CD. Although previous theoretical 30,31 and experimental 14,23,24 work attributed the strong optical circular dichroism to a predominately excitonic origin, these analyses cannot account for the comparable or enhanced circular polarization in EL devices, given the expected detrimental effect of back-electrode reflection.
In this work, we discover an anomalous light emission phenomenon in chiral polymeric CP-OLEDs. For the chiral polymeric materials under study, CP-EL exhibits opposite handedness in the forward and backward emission directions, counter-intuitive to what is usually expected in EL or PL (Figure 1b). With such direction-dependent CP emission, the back-reflected light exhibits the same handedness as the forward emission, avoiding the polarization cancellation that occurs in devices using other materials and boosting the net CP-EL exiting the device 18,20. Furthermore, for the first time, we investigate the effect of current flow on CP-EL, whose handedness can also be switched by reversing the current flow in an OLED. We propose that the directional CP-EL observed is caused by the topological nature of the electronic wave functions in chiral polymers. Because of orbital-momentum locking 7, the current flow induces a nonequilibrium orbital polarization in electron and hole carriers. Therefore, finite angular momentum transfers from the electron/hole orbital to the photon spin in the optical transition. When they have the same spin, the counter-propagating CP lights exhibit opposite handedness. This orbital polarization effect rationalizes that the handedness of the CP light is determined both by the current direction and by the emission direction. This model reveals an exotic CP-EL mechanism arising from the electric transition dipole via current-induced time-reversal breaking, generally displaying larger circular polarization than PL or CD, which involve both electric and magnetic transition dipoles. Our work paves a path to design novel chiroptoelectronic devices with strong circular polarization.
Results
A chiral polymer blend consisting of an achiral light-emitting polymer, poly(9,9-dioctylfluorene-alt-benzothiadiazole) (F8BT, Fig. 1), and a non-emissive chiral additive, [P]-aza[6]helicene, was selected for the investigation of CP-EL. Annealed chiral polymer blends demonstrate strong and robust induced optical CD, with an absorption dissymmetry factor (g_abs) of ∼0.6 (see Fig. S1) 13,23, calculated in the following way:

g = 2(I_L − I_R)/(I_L + I_R),

where I_L/R is the left-/right-handed intensity (for g_EL, the irradiance recorded from the CP-OLEDs). However, despite a fixed absolute stereochemistry of the chiral material in the emissive layer of both devices, the sign of the CP-EL signals was found to depend on the device structure. When the emission direction relative to the current direction is switched, the inverted CP-OLED emits right-handed circularly polarized light through the ITO, with a g_EL of −0.33. Apart from the emission direction-dependent CP-EL signals in conventional versus inverted devices, we detected no evidence of erosion of g_EL by the reflective electrodes. Compared with other reported CP-OLEDs 18,27-29, the polyfluorene-based CP-OLEDs we developed exhibit one of the highest known g_EL values (Fig. 2a). In contrast, lanthanide complexes exhibit intrinsically high PL dissymmetry (g_PL) 20, but the g_EL recorded from the transparent electrode dramatically decreases when the thickness of the reflective metal electrode is increased.
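A minimal sketch of how such a dissymmetry factor could be computed from measured left- and right-handed spectra; the helper name and the array values are illustrative placeholders, not data or code from this work:

```python
import numpy as np

def dissymmetry_factor(I_L, I_R):
    """g = 2 (I_L - I_R) / (I_L + I_R), evaluated elementwise over a spectrum."""
    I_L, I_R = np.asarray(I_L, dtype=float), np.asarray(I_R, dtype=float)
    return 2.0 * (I_L - I_R) / (I_L + I_R)

# Illustrative irradiance values (arbitrary units) at three wavelengths:
I_L = np.array([1.30, 1.25, 1.10])   # left-handed CP irradiance
I_R = np.array([0.70, 0.75, 0.90])   # right-handed CP irradiance
print(dissymmetry_factor(I_L, I_R))  # -> [0.6 0.5 0.2]
```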
To compare our results with other previously reported CP-OLEDs, we performed CP-EL measurements on semi-transparent versions of both the conventional and inverted CP-OLEDs (Fig. 2c).
Surprisingly, emission direction-dependent CP-EL behavior was observed in both device structures, where the CP-EL from the forward and backward emission (i.e., through the semi-reflective electrode) exhibits opposite handedness. Considering that this emission direction-dependent dissymmetry factor is only observable in EL, but not in the CP-PL or CD of the chiral thin films (Fig. S2), we speculate that this behavior is associated with the flow of charge carriers within the devices. To unambiguously describe and compare the emission direction-dependent CP-EL signals in the two device architectures, we define the emission direction relative to the charge carrier flow direction (see Table S1 and Scheme S1). Electrons and holes can also recombine directly while retaining a memory of their propagation direction. As we will show, the electron/hole momentum is locked to its orbital angular momentum (OAM) in a chiral molecule. In CP-EL, opposite current flow leads to opposite OAM polarization in electrons/holes and eventually induces opposite spin of the CP light, i.e., direction-dependent CP handedness. In the following discussions, we use OAM or orbital to refer to that of the electron/hole, and spin to refer to that of the CP light, if not specified otherwise.
Next, we revisit the general theory that describes the CP emission effect. According to Fermi's golden rule, the emission rate of CP light is, schematically,

I_{R/L} ∝ |⟨0| H′_{R/L} |1⟩|² δ(ε₁ − ε₀ − ℏω),    (3)

where |0⟩ (|1⟩) represents the ground (excited) state with energy ε₀ (ε₁) and H′_{R/L} is the light-matter interaction for right/left-handed CP light. The leading terms of the CP light emission can then be derived (up to prefactors) as

I_R − I_L ∝ I₀ [ Im(x₀₁ y₁₀) + c⁻¹ Im(r₀₁ · m₁₀) ] δ,    (4)

where x₀₁ = ⟨0|x|1⟩ and m^x₀₁ = ⟨0|m_x|1⟩ represent the electric and magnetic transition dipoles, respectively, and I₀ = |E₀|². We note m = (m_x, m_y, m_z), r = (x, y, z), and δ stands for the same δ-function as in Eq. 3. The second term in Eq. 4 is routinely employed to understand CD, CP-PL, or CP-EL for organic/inorganic systems and has been called natural chiroptical activity 35; we refer to it as the natural circular polarization effect (NCPE). The first term was therefore generally ignored when studying organic molecules; it was referred to as magnetic CD 38,39 in the absorption of magnetic materials or in an external magnetic field.
However, if electrons and holes carry finite velocities before recombination, the first term cannot be naively neglected. In other words, the current flow can induce magnetization, more specifically the orbital magnetization, as we will show. This nonequilibrium phase breaks time-reversal symmetry (TRS) in chiral molecules. We refer to the first term as the anomalous circular polarization effect (ACPE). In such a case, ACPE may contribute more to the net circular polarization than NCPE because, for light, the electric field is much stronger than the magnetic field.
Now we discuss the TRS-breaking of |0⟩ and |1⟩ in the presence of current flow. The electron wave function |ψ⟩ in a molecule, for example with a sine-function-like profile (see Fig. 3), can be considered as the superposition of two counter-propagating plane waves, |ψ⟩ = |ψ₊⟩ + |ψ₋⟩, where |ψ±⟩ propagate along the ±z directions, and |ψ₊⟩ = |ψ₋⟩* because TRS allows |ψ⟩ to be real-valued. If an electron picks up a velocity along ±z due to a charge current, its wave function reduces from |ψ⟩ to |ψ±⟩. We point out that |ψ±⟩ themselves violate TRS, although |ψ⟩ does not.
Next, |ψ₊⟩ and |ψ₋⟩ carry opposite OAM (±l); a positive-moving plane wave can generally be described as carrying a finite OAM set by its phase winding (see Supplementary Information). In the presence of current along −z, we need to replace |0⟩ (|1⟩) by |0₊⟩ (|1₊⟩) to evaluate the ACPE in Eq. 4. In this case, the OAM transfer Δl is nonzero in the optical transition from |1₊⟩ to |0₊⟩, both of which carry finite OAM, as illustrated in Fig. 3. Because Δl is finite, oppositely emitted lights carry the same spin and thus exhibit opposite handedness. In addition, reversing the current leads to −Δl.
We note that Δl is gauge invariant, although l of a given band depends on the specific gauge. Furthermore, we quantitatively estimate the ACPE and NCPE for the chiral F8BT polymer assembly by ab initio calculations. It is challenging to refine the accurate atomic structure of such chiral aggregates. Without losing generality, we simulate chiral stacking of F8BT molecules and focus on the intermolecular chirality that is associated with the dominant charge transport direction along the layered packing structure (denoted the z axis here) 41. Although |0⟩ and |1⟩ can in general be many-body wave functions, we use the highest occupied molecular orbital (HOMO) and lowest unoccupied molecular orbital (LUMO) to represent |0⟩ and |1⟩, respectively, ignoring higher-order corrections (such as the distortion in excited states) in the calculations. As shown in Figure 4, two-layer stacking with a counter-clockwise twisting angle of 30° reshapes the HOMO and LUMO wave functions dramatically compared to a single molecular layer. By calculating the ACPE involving the +z-moving HOMO and LUMO, we obtain a large dissymmetry factor |g_EL| = 0.48 (0.44) for two (three) layer stacking, which is of the same order of magnitude as the experimental g_EL. The OAM can be evaluated from the phase winding number in the xy plane, verifying the orbital-momentum locking in |0±⟩ and |1±⟩. Because |0⟩ and |1⟩ are usually composed of many plane waves, the total value of l is not necessarily an integer. Better knowledge of the molecular arrangement of chiral polymer assemblies will help improve the predictive power of the calculations in future work.
Additionally, the current-induced magnetization in our experiments is relevant to the orbital rather than the spin of electronic states. If electron-spin polarization mattered, it would require substantial spin-orbit coupling (SOC) in the device. These organic polymers, made of light elements, exhibit negligible SOC. Although the metal electrodes may include heavy elements, the circular polarization rate remains the same for Al, Ag, and Au electrodes with largely varied SOC.

In summary, we report an anomalous phenomenon where the handedness of CP light emission depends on the emission direction. This effect enables us to design unconventional CP-OLED devices with large g_EL that are immune to back-electrode reflection. We highlight that the orbital-momentum locking causing ACPE is strongly associated with the charge transport mode in the polymer systems, and we therefore suggest the following design principles for further development of CP-OLEDs with strong CP-EL. To ensure the entire stack of molecular assemblies exhibits strong ACPE, the emissive sites have to be strongly coupled with chiral transport sites, or ideally be within the same sites as in our polymer systems. If charge carriers are transported independently, such as in host materials, and then get scattered to random adjacent chiral emissive sites, the net momentum and OAM will be quenched and only NCPE will appear. In this case, CP-EL can no longer be considered to have the same origin as CP-PL and circular dichroism, where no charge transport and current flows exist.
We propose an ACPE that involves finite angular momentum transfer in the optical transition.
Because ACPE and NCPE come from the first- and second-order optical transitions in Eq. 4, ACPE is often much larger than NCPE when TRS is broken. We highlight that the unusual TRS-breaking in ACPE is driven by the nonequilibrium orbital magnetization, which originates in the chiral orbital nature of the wave functions. In CP-OLEDs, such orbital magnetization is caused by the current flow (rather than by static magnetization), the impact of which was ignored and unexplored for almost 200 years of research on chiral materials 42. Our work reveals an intriguing unification of chirality in seemingly unrelated aspects: structural geometry, electronic topology, and the handedness of CP light.
Methods

Thin-film and device fabrication: The cleaning process for all substrates (fused silica and prepatterned ITO glass, Thin Film Devices Inc., 20 ohms/sq) involved rinsing in an ultrasonic bath with acetone, isopropyl alcohol (IPA), Hellmanex III (Sigma-Aldrich), and deionized water for 30 min. The substrates were then transferred to a plasma asher for 3 min at 80 W (fused silica) or 50 W (prepatterned ITO) before spin-coating. F8BT and aza[6]helicene were dissolved in toluene to a concentration of 30 mg/mL and blended to form a 10% aza[6]helicene solution.
An emissive layer about 130 nm thick can be achieved by dynamically spin-coating at 2300 rpm for 1 min. Chiral samples were annealed for 10 min in a nitrogen atmosphere (glovebox, < 0.1 ppm of H₂O, < 0.1 ppm of O₂). Dynamic coating ensures that strong chiroptical activity is achieved without producing overly thick films, compared to previous studies 14,23. Organic film thicknesses were monitored using a Dektak 150 surface profiler, and metal thicknesses were taken as displayed by the QCM monitor. Circular dichroism measurements were performed using a Chirascan (Applied Photophysics) spectrophotometer.
CP-EL:
Left-handed and right-handed CP emission spectra were collected using a combination of a linear polarizer and a zero-order quarter-wave plate (546 nm, Thorlabs) placed before the detectors.

DFT calculations: First-principles calculations were performed with the Vienna Ab initio Simulation Package (VASP) 43. The generalized gradient approximation (GGA) was used for the exchange-correlation functional 44. F8BT molecules were stacked along the z axis in a twisted manner. The molecular cluster model included at least 10 Å of vacuum along all directions.
An energy cutoff of 400 eV was used for the plane-wave basis. The total number of plane waves is about 4×10⁵. We denote a plane wave by its wave vector G = (G_x, G_y, G_z). In the plane-wave basis, the wave function of the HOMO, |0⟩, can, for example, be expressed as

|0⟩ = Σ_{G_z} φ₀(G_z) e^{i G_z z},

where φ₀(G_z) = Σ_{G_x,G_y} c⁰_G e^{i(G_x x + G_y y)} and c⁰_G is the plane-wave coefficient extracted directly from the DFT wave functions. The charge density is ρ₀ = |⟨r|0⟩|². Besides G_z z, the phase of the G_z-propagating wave in Fig. 4 is arg φ₀(G_z). Under an electrical current along −z, the CP electric dipole transition amplitudes are calculated (schematically) as A_{R/L} = ⟨0₋| x ∓ iy |1₋⟩, with intensities I_{R/L} ∝ A_{R/L} A*_{R/L} (Eq. 6), where * represents taking the complex conjugate. One can analyze the symmetry constraints on CP emission from Eq. (6). If inversion symmetry is present, then c^{0,1}_G = p^{0,1} c^{0,1}_{−G}, where p^{0,1} = ±1 refers to the parity eigenvalue of |0⟩ or |1⟩. On the other hand, time-reversal symmetry requires that c^{0,1}_G = (c^{0,1}_{−G})*. Thus, if inversion and time-reversal symmetries exist simultaneously, the coefficient c^{0,1}_G is purely real (imaginary) for a parity-even (odd) state. Therefore, I_R = I_L always holds according to Eq. (6). When inversion is broken, circular polarization (I_R ≠ I_L) can appear.
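The symmetry argument above can be checked numerically. The sketch below is a toy illustration (not the paper's VASP workflow): it takes assumed transition dipoles x01 = ⟨0|x|1⟩ and y01 = ⟨0|y|1⟩ and evaluates the two CP intensities; real dipoles (the inversion- plus time-reversal-symmetric case) give I_R = I_L, while a relative phase gives I_R ≠ I_L. The ∓ sign convention is illustrative.

```python
def cp_intensities(x01, y01):
    """Schematic right/left CP electric-dipole intensities
    I_{R/L} = |x01 -/+ i*y01|^2 from transition dipoles (prefactors dropped)."""
    A_R = x01 - 1j * y01
    A_L = x01 + 1j * y01
    return abs(A_R) ** 2, abs(A_L) ** 2

# Real dipoles (inversion + time-reversal symmetric): I_R == I_L.
print(cp_intensities(0.8, 0.3))    # ~ (0.73, 0.73)

# Relative phase between x01 and y01 (broken symmetry): I_R != I_L.
print(cp_intensities(0.8, 0.3j))   # ~ (1.21, 0.25)
```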
Supplementary Information
Table S1. Summary of reported g_PL and g_EL.
Scheme S1. Molecular structures of the materials in the Figure 2 analysis.
Figure S1. Absorption and circular dichroism spectra.
Figure S2. PL and absorption dissymmetry factors measured from opposite directions.
S1 Electric and Magnetic Dipoles in Plane-Wave Basis
In this section we explicitly derive the expressions of the electric and magnetic transition dipoles in terms of the coefficients of the plane-wave basis. The wavefunctions of the HOMO/LUMO, i.e., ψ_{0,1}, can be expanded in a series of plane-wave functions as |0,1⟩ = Σ_G c^{0,1}_G e^{iG·r}. The electric dipole matrix elements follow by inserting this expansion, and the magnetic dipole ⟨0₊|m_α|1₊⟩ can be derived similarly. The electric or magnetic dipole itself depends on the gauge choice of |0⟩ and |1⟩, but the products x₀₁ y₁₀ and r₀₁ · m₁₀ are gauge independent.
S2 CP and Orbital Angular Momentum (OAM)
In this section we relate the CP to an OAM-like quantity. The intensity difference between right- and left-handed circularly polarized light can be rewritten, up to positive prefactors, as

I_R − I_L ∝ ⟨L¹_z⟩₀,    (S5)

where ⟨L¹_z⟩₀ = ⟨0|x|1⟩⟨1|p_y|0⟩ − ⟨0|y|1⟩⟨1|p_x|0⟩ is an OAM-like variable and ⟨···⟩₀ means the expectation value at state |0⟩. It should be noted that Eq. (S5) is a gauge-invariant form of CP-EL, which differs from the gauge-dependent magnetic dipole m⁰¹_z or m⁰⁰_z in Eq. (S4).
Augmentable Paraphrase Extraction Framework
Paraphrase extraction relying on a single factor such as distribution similarity or translation similarity might lead to the loss of some linguistic properties. In this paper, we propose a paraphrase extraction framework, which accommodates various linguistically motivated factors to optimize the quality of paraphrase extraction. The major contributions of this study lie in the augmentable paraphrasing framework and the three kinds of factors conducive to both semantic and syntactic correctness. A manual evaluation showed that our model achieves more successful results than the state-of-the-art methods.
Introduction
Paraphrasing provides an alternative way to express an idea using different words. Early work on paraphrase acquisition has been based mainly on either distributional similarity (e.g., Lin and Pantel, 2001) or the pivot-based approach (e.g., Bannard and Callison-Burch, 2005). Both methods have their strengths and limitations. Distributional similarity is capable of extracting syntactically correct paraphrases, but risks including antonymous phrases as paraphrases. On the other hand, the pivot approach has the advantage of preserving semantic similarity among the generated paraphrases; however, the quality and quantity of the paraphrases closely correlate with the techniques of bilingual phrase alignment.
By considering single factors only, existing paraphrasing methods can lose some linguistic properties. In view of this, we attempt to differentiate the importance of the paraphrase candidates based on various factors. In this paper, we take a graphical view of the paraphrasing issue. To achieve the goal mentioned above, we adopt the Weighted PageRank algorithm (Xing and Ghorbani, 2004). English phrases are treated as nodes. The edge weights are determined by various factors, such as semantic similarity or syntactic similarity between nodes. This means that the quality of the ranked paraphrase candidates depends on the factors we select and add. In other words, our framework is augmentable and is able to accommodate various factors to optimize the quality of paraphrase extraction.
In this light, we propose three linguistically motivated factors to improve the performance of paraphrase extraction. Lexical distributional similarity is used to ensure that the contexts in which the generated paraphrases appear are similar, whereas syntactic distributional similarity is adopted for the purpose of maintaining syntactic correctness. Translation similarity, one more factor, is capable of preserving semantic equivalence. Adopted together, these three factors effectively achieve better performance on paraphrase extraction. The evaluation shows that our model achieves more satisfactory results than the state-of-the-art pivot-based and graph-based methods.
Related Work
Several approaches have been proposed to extract paraphrases. Earlier studies have focused on extracting paraphrases from monolingual corpora. Barzilay and Mckeown (2001) determine that the phrases in a monolingual parallel corpus are paraphrases of one another only if they appear in similar contexts. Lin and Pantel (2001) derive paraphrases using parse tree paths to compute distributional similarity. Another prominent approach to paraphrase extraction is based on bilingual parallel corpora. For example, Bannard and Callison-Burch (2005) propose the pivot approach to extract phrasal paraphrases from an English-German parallel corpus. With the advantage of its parallel and bilingual natures of such a corpus, the output paraphrases preserve semantic equivalence. Callison-Burch (2008) further places syntactic constraints on extracted paraphrases to improve the quality of the paraphrases. Chan et al. (2011) use monolingual distributional similarity to rank paraphrases generated by the syntactically-constrained pivot method.
Recently, some studies take a graphical view of the pivot-based approach. Kok and Brockett (2010) propose the Hitting Time Paraphrase algorithm (HTP) to measure the similarities between phrases. Chen et al. (2012) adopt the PageRank algorithm to find more relevant paraphrases that preserve both meaning and grammaticality for language learners. In this paper, we, similarly, present the state-of-the-art approach as a graph. However, unlike Kok and Brockett (2010), we treat English phrases (instead of multilingual phrases) as nodes. On the other hand, different from Chen et al. (2012), our model is augmentable by involving varied linguistic information or domain knowledge.
Method
Typically, the state-of-the-art paraphrase extraction models only deal with single factors such as distribution similarity or translation similarity. However, different linguistic factors could facilitate the paraphrase extraction in various ways. With this in mind, we propose an augmentable paraphrase extraction framework based on a graph-based method, which can be modeled with multiple linguistically motivated factors.
In the following section, we describe the graph construction (Section 3.1). Then the paraphrase extraction framework is outlined in Section 3.2. Section 3.3 introduces the three factors we proposed for optimizing the quality of paraphrase extraction. Finally, we utilize the grid search method to fine-tune the parameters of our model.
Graph Construction
We transform the paraphrase generation problem into a graph-based problem. First, we generate a graph G ≡ (V, E), in which an English phrase is a node v ∈ V and two nodes are connected by an edge e ∈ E. A set of paraphrase candidates CP = {c₁, c₂, …, c_n} is generated for a query phrase q from a bilingual corpus based on the pivot method (Bannard and Callison-Burch, 2005). We further generate a set of transitive paraphrases CP′ = {c′₁, c′₂, …, c′_m} of the phrase q, namely, the paraphrases of each candidate cᵢ, in the same manner. We truncate the paraphrase candidates whose translation similarities are smaller than a threshold ε (set to 0.01); we also exclude any candidate cᵢ that consists only of a stopword, contains q, or is contained in q. Thus, some noisy paraphrases are easily eliminated.
Consider the example graph for the query phrase "on the whole" shown in Figure 1. We first find its set of candidate paraphrases CP, including "generally speaking", "in general", and "in a nutshell", using the pivot-based method mentioned above. Then, for each phrase in CP, we extract the corresponding paraphrases respectively. For example, "in brief", "broadly speaking", and "in general" are paraphrases of the first phrase "generally speaking" in CP. During the process, we keep the extracted paraphrases whose translation similarities are larger than a threshold δ (set to 0.0001). By linking the phrases with their transitive paraphrases, the graph G is created.
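A sketch of this candidate-graph construction in Python. The `pivot_paraphrases` callback is a hypothetical stand-in for the pivot-based extraction (returning candidates with their translation similarities); the thresholds follow the values given above:

```python
from collections import defaultdict

def build_graph(q, pivot_paraphrases, eps=0.01, delta=0.0001, stopwords=frozenset()):
    """Build the paraphrase graph G for a query phrase q.

    pivot_paraphrases(p) -> {candidate: translation_similarity} is assumed
    to implement the pivot method (Bannard and Callison-Burch, 2005).
    """
    def keep(phrase, cand, score, thr):
        # drop low-similarity, stopword-only, or substring candidates
        return (score >= thr and cand not in stopwords
                and cand not in phrase and phrase not in cand)

    edges = defaultdict(dict)  # edges[v][u] = translation similarity of edge (v, u)
    CP = {c: s for c, s in pivot_paraphrases(q).items() if keep(q, c, s, eps)}
    for c, s in CP.items():
        edges[q][c] = s
        for c2, s2 in pivot_paraphrases(c).items():  # transitive paraphrases CP'
            if keep(c, c2, s2, delta):
                edges[c][c2] = s2
    return edges
```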
Augmentable Paraphrase Extraction Framework
In this sub-section, we propose an augmentable paraphrase extraction framework, which can be modeled with multiple factors. Given a graph G ≡ (V, E), the PageRank algorithm assigns a value PR to each node as its importance measurement. We further adopt the Weighted PageRank algorithm (Xing and Ghorbani, 2004) to state the relatedness between nodes. We calculate the weight of the edge that links node v to node u by combining various factor functions ℱ_k:

W(v, u) = Σ_k λ_k · ℱ_k(v, u, q),

where q is the query phrase, ℱ_k(v, u, q) is a factor function, and λ_k is the weight of that factor. The weighted PR value of a node u is then defined iteratively, in the standard weighted PageRank form, as

PR(u) = (1 − d) + d · Σ_{v ∈ B(u)} [ W(v, u) / Σ_{w ∈ Out(v)} W(v, w) ] · PR(v),

where B(u) is the set of nodes that point to u, Out(v) is the set of nodes that v points to, and d is a damping factor.
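A compact sketch of this weighted PageRank iteration over the graph built in the previous sub-section; the normalization by outgoing weight mass and the damping factor d are standard choices, written here as assumptions rather than the paper's exact formulation:

```python
def weighted_pagerank(edges, d=0.85, iters=50):
    """edges[v][u] holds the combined factor weight W(v, u)."""
    nodes = set(edges) | {u for outs in edges.values() for u in outs}
    pr = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iters):
        new = {n: (1.0 - d) / len(nodes) for n in nodes}
        for v, outs in edges.items():
            total = sum(outs.values())
            if total == 0.0:
                continue
            for u, w in outs.items():
                new[u] += d * pr[v] * (w / total)
        pr = new
    return pr  # rank the paraphrase candidates of q by pr[candidate]
```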
Linguistically Motivated Factors
Our model enables linguistically motivated factors to optimize the performance of paraphrase extraction. In this sub-section, we introduce three decisive factors: lexical distributional similarity, syntactic distributional similarity and translation similarity.
Lexical distributional similarity factor
Lexical distributional information ensures that the contexts in which the generated paraphrases appear are similar. For each phrase p in G, we extract three kinds of context vectors, v_L, v_R, and v_LR, and calculate vector similarities. Vectors v_L and v_R represent the sets of adjacent words occurring to the left and right of p, respectively. Words appearing simultaneously on both the left and right sides of p are extracted as the feature vector v_LR. Each item in a vector is an association score calculated by the pointwise mutual information of the phrase p (Cover and Thomas, 1991).
Given the query phrase q, for each paraphrase candidate u in G, we calculate the cosine similarity of the context vectors v_L, v_R, and v_LR between q and u. That is, the three factors ℱ_L, ℱ_R, and ℱ_LR are described as cosine similarity functions:

ℱ_k(u, q) = cos(v_k^u, v_k^q) = (v_k^u · v_k^q) / (‖v_k^u‖ ‖v_k^q‖),

where v_k^u denotes a context vector of u, v_k^q a context vector of q, and k ∈ {L, R, LR}.
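A sketch of one such factor using sparse PMI-weighted context vectors; the vector contents below are invented for illustration:

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two sparse context vectors (Counters)."""
    num = sum(a[k] * b[k] for k in a.keys() & b.keys())
    den = (math.sqrt(sum(v * v for v in a.values()))
           * math.sqrt(sum(v * v for v in b.values())))
    return num / den if den else 0.0

# Hypothetical PMI-weighted left-context vectors for q and a candidate u:
vL_q = Counter({"but": 1.2, "and": 0.4, "so": 0.9})
vL_u = Counter({"but": 1.0, "so": 0.7, "thus": 0.5})
print(cosine(vL_q, vL_u))  # F_L(u, q), a value in [0, 1]
```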
Syntactic distributional similarity factor
Calculating the extrinsic syntactic similarity between nodes serves to maintain the syntactic correctness of the generated paraphrases. For each phrase p, we extract three vectors, s_L, s_R, and s_LR, which represent the <POS tag, frequency> pairs appearing on the left, right, and both left and right sides of the phrase p. We use the GENIA tagger to obtain the POS tags surrounding the phrase p. Each item in a vector is paired with the frequency of the corresponding tag. For each paraphrase candidate u of the query phrase q, we calculate the similarities between the vectors of u and q using cosine similarity:

ℱ_k(u, q) = cos(s_k^u, s_k^q),

where s_k^u denotes a vector of u, s_k^q a vector of q, and k ∈ {L, R, LR}.
Translation similarity factor
Next, we calculate the intrinsic translation similarity, which is capable of preserving semantic equivalence. The translation similarity factor for an edge connecting nodes v and u is defined, following the pivot formulation, as

ℱ_T(v, u) = Σ_{f ∈ T(v)} P(f | v) · P(u | f),

where u is one paraphrase of the phrase v, T(v) denotes the set of foreign-language alignments of v, and P(·) is the translation probability. Both the alignments and the translation probabilities are obtained as described in Och and Ney (2003).

Figure 1. Example graph for the query phrase "on the whole": the candidate set CP contains "generally speaking", "in general", and "in a nutshell", with transitive paraphrases such as "in brief" and "broadly speaking".
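A sketch of this pivot computation, assuming the two probability tables have already been read from the Giza++ alignments; the table names are placeholders:

```python
def translation_similarity(v, u, p_foreign_given, p_given_foreign):
    """F_T(v, u) = sum over f in T(v) of P(f | v) * P(u | f),
    the pivot-based paraphrase probability.

    p_foreign_given[v] -> {f: P(f | v)}   (English -> foreign)
    p_given_foreign[f] -> {u: P(u | f)}   (foreign -> English)
    """
    return sum(p_f * p_given_foreign.get(f, {}).get(u, 0.0)
               for f, p_f in p_foreign_given.get(v, {}).items())
```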
Parameter Optimization
Once the factors are selected, we have to determine their weights (i.e., the λ_k in Section 3.2). In other words, we train the weights of the factors such that performance is optimal on a given development data set. We use Discounted Cumulative Gain (DCG) (Järvelin and Kekäläinen, 2002) to measure the quality of paraphrases. From the top to the bottom of the result list, the DCG score accumulates the gain of each result, discounted at lower ranks:

DCG(c) = r₁ + Σ_{i=2}^{|c|} rᵢ / log₂ i,

where r represents the set of manually labeled paraphrase scores, c is the set of paraphrases to be evaluated, and rᵢ is the paraphrase score at rank i of c.
The parameters are selected in order to maximize the DCG scores over a total of S query phrases from the development data set:

λ* = argmax_λ Σ_{s=1}^{S} DCG(c_{q_s}; λ),

where c_{q_s} is the set of paraphrases of the query phrase q_s extracted by our model under the parameter values λ. In the process, we first assign each parameter a random value ranging from 0 to 1 and use a grid-based line optimization method to optimize the parameters. While optimizing a parameter, we maximize the objective along that dimension while the parameters of the other dimensions are fixed. The process stops when the values of the parameters do not change over two iterations.
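A sketch of the DCG computation and the coordinate-wise grid search; the grid values and the objective wiring are assumptions, not the paper's exact settings:

```python
import math

def dcg(scores):
    """DCG with the Jarvelin & Kekalainen (2002) discount: the item at
    rank 1 is undiscounted; rank i >= 2 is discounted by log2(i)."""
    return scores[0] + sum(s / math.log2(i)
                           for i, s in enumerate(scores[1:], start=2))

def grid_line_search(objective, lambdas, grid, max_iters=20):
    """Optimize one weight at a time over a fixed grid, holding the others
    fixed; stop when no weight changes during a full pass."""
    lambdas = list(lambdas)
    for _ in range(max_iters):
        changed = False
        for k in range(len(lambdas)):
            best = max(grid, key=lambda v: objective(lambdas[:k] + [v] + lambdas[k + 1:]))
            if best != lambdas[k]:
                lambdas[k], changed = best, True
        if not changed:
            break
    return lambdas
```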
Experimental Setting
In this paper, we adopted the Danish-English section (containing 1,236,427 sentences) of the Europarl corpus, version 2 (Koehn, 2002), for computing distributional similarity and translation similarity. Word alignments were produced by the Giza++ toolkit (Och and Ney, 2003). We randomly selected 50 phrases as the development set for optimizing the parameters. For each phrase, three distinct sentences containing the phrase were randomly sampled. A total of 6073 paraphrases were labeled with scores of 0 (incorrect), 1 (partially correct), or 2 (correct), considering the fluency of each sentence, for the development-set optimization.
We evaluated the paraphrase quality through a substitution test. We randomly selected 133 commonly used phrases from 30 research articles. For each phrase, we extracted the corresponding paraphrase candidates and evaluated its top 5 candidates. At the same time, three or fewer distinct sentences containing the phrase were randomly sampled (a total of 398 sentences were evaluated) from the New York Times section of the English Gigaword (LDC2003T05), to capture the fact that paraphrases are valid in some contexts but not in others (Szpektor et al., 2007). Two native-speaker judges evaluated whether the candidates were syntactically and semantically appropriate in the various contexts. They assigned two values, corresponding to the semantic and syntactic considerations, to each sentence, with scores of 0 (not acceptable), 1 (acceptable), or 2 (acceptable and correct). The inter-annotator agreement was 0.67.
It is worth noting that we include two measurement schemes for comprehensive analysis. The strict scheme considers a paraphrase as "correct" if and only if both of the two judges scored 2 points, whereas the other one considers a paraphrase as "acceptable" if it is given scores of 1 or 2.
Experimental Results
We compared the performance of five models, SBiP, SBiP-MonoDS, GB, APF-avgW, and APF, using precision, coverage, MRR, and DCG. Because the numbers of phrases with paraphrases generated by SBiP and SBiP-MonoDS (101 phrases) and by GB, APF-avgW, and APF (131 phrases) differ, we analyzed the results for the 99 phrases, involving 295 sentences, covered by all five models. Top-k precision indicates the percentage of sentences in which correct paraphrase(s) appear among the top-k paraphrase candidates. Coverage was measured as the number of sentences in which at least one of the top five paraphrases is correct, over all 398 sentences. Table 1 shows the results for precision and coverage under the overall consideration. As can be seen, APF achieved higher precision and coverage than the other four methods.
Additionally, we evaluated the results using MRR. MRR measures how much effort a user needs to locate the first appropriate paraphrase for a given phrase in the ranked list of paraphrases. As shown in Table 2, the APF model performed better than the other models in both the correct and acceptable measures. Moreover, Table 3 shows that the APF model outperformed the other models in both the correct and acceptable measures, based on either the overall or the individual consideration. DCG comprehensively considers both the number of good-quality paraphrases and their ranking. Overall, the APF model achieved better performance in paraphrase extraction.

Table 3. DCG scores of the five models. Note that the former value indicates the correct measure and the latter the acceptable measure.
Conclusion
In this paper, we propose a paraphrase extraction framework.
Accommodating various linguistically motivated factors, the framework is capable of extracting better paraphrases carrying linguistic features. The results of the manual evaluation demonstrated that the proposed method achieves performance improvements in terms of precision, coverage, MRR, and DCG. The optimized parameters show that the lexical and syntactic distributional similarity factors make a substantial contribution to our model. Specifically, the words as well as the POS tags appearing on both the left and right sides show satisfactory performance.
However, some further analyses could be conducted in the future. Although the weights of the parameters carry the linguistic properties, the proposed factors could be examined separately to compare their individual effectiveness within our framework. On the other hand, other factors could be taken into consideration. For example, parsing information could be added to the framework to investigate whether, or to what extent, it contributes to the paraphrasing task.
William Tyndale and Erasmus on How to Read the Bible: A Newly Discovered Manuscript of the English Enchiridion
British Library MS Additional 89149, newly discovered in 2015 at Alnwick Castle, is a previously unknown translation of Erasmus’ Enchiridion militis Christiani into English. Dated 1523, it now represents the earliest surviving English translation of any work by Erasmus. This article presents detailed verbal evidence that associates the vocabulary of imitatio in the translation with William Tyndale’s hermeneutic work on scripture, including his New Testament of 1525–1526. It thus offers the strongest evidence to date of Tyndale's hand in the English Enchiridion, long the subject of scholarly enquiry. It also provides a fresh interpretation of Tyndale’s engagement with Erasmian humanism, and his position on disputes over literal and figurative senses in early Protestantism. At the heart of this is the distinctive English word ‘counterfeit’, the meanings of which are traced through a range of medieval and Renaissance sources, from Chaucer onwards.
Testament, with woodcuts of Evangelists and Apostles opening the gospels, and Romans, 1 Peter, 1 John, Hebrews, James, Jude, and Revelation. 9 Just as striking, although less well-known, is the relation of the trade in English Bibles to the Dutch-speaking world. The work of Guido Latré has shown how Tyndale's books from Antwerp (including editions of the Pentateuch (1530), and a revised New Testament (1534)), make more sense in the context of the Low Countries than they do of England. The Antwerp reprint by Christoffel van Ruremund of Tyndale's Worms New Testament appeared in the wake of the first complete Dutch Bible (based on Luther's German), and the first complete French Bible (by Jacques Lefèvre d'Etaples) from Merten de Keyser. 10 Antwerp was the true centre for the Bible in English for a decade, including not only Tyndale's work but also George Joye's Psalter (1534), the "Matthew Bible" of 1537 (edited by John Rogers), and Coverdale's New Testament (1538). 11 In addition to this material resemblance, whatever Daniell's assertions, Tyndale readily uses intermediary sources alongside Hebrew and Greek texts in making his versions. This is what a sensible translator does, surrounding herself with glosses, dictionaries, grammars, or alternative versions of the text. Antwerp was one of the best places in the world to find such aids. 12 Luther's German inflects Tyndale's usage even when we can also see him reacting to specific points of Hebrew grammar in Genesis. 13 His Pauline vocabulary of justification develops as a result of a careful knowledge of Luther's struggle between the Latin legal language of iustitia and his emerging German theology of Rechtfertigung. 14 More broadly, it can be shown that Tyndale learned to express himself in the vernacular partly by the experience of a decade living within German and Dutch multilingual communities. 15 While Daniell consistently, indeed actively, plays down Tyndale's Latinity, it can also be demonstrated, both directly, and indirectly via use of the Wycliffite translations. 16
I. An English book, called Enchiridion
Tyndale, we are reminded, was never strictly speaking part of the Reformation in England. He left before it began, and died before he could join it. As a Reformer, he is better seen in German or Flemish guise: indeed, he was condemned in Vilvoorde not for translating the Bible into English but for local heresies. But can he be seen as part of what Shuger calls the "Renaissance Bible"? The crucial questions here concern Tyndale's relationship to Erasmianism, and to Erasmus in particular. 17 In a formal sense, Tyndale used a copy of Erasmus's bilingual text, most likely from the third edition of 1522, as a working copy for his New Testament translation of 1525-1526. The Enchiridion militis christiani of Erasmus also has a seminal place in biographies of Tyndale from John Foxe to Daniell. 18 Foxe's Actes and Monuments (1563) describes Tyndale, after his university education, returning in 1522 to his native Gloucestershire to work as a tutor for a local landowner with connections at court, Sir John Walsh of Little Sodbury Manor. In which company, Foxe says: Amongest whome commonly was talke of learning, as well of Luther & Erasmus Roterodamus, as of opinions in the scripture. The saide Maister Tyndall being learned & which had bene a student of diuinitie in Cambridge, and hadde therein taken degree of schole, did many times therin shewe hys mynde and learnyng. 9 Herbert, Historical Catalogue, 1-2.
19 Tyndale's knowledge of "open and manifest scripture" both impresses his hosts and causes local controversy. Thereupon, Foxe says, "he did translate into English a book called as I remember Enchiridion militis Christiani. The which being translated, delivered to his master and Lady" (570). Some corroboration of Foxe's story, with additional detail, occurs in a document dated 1528, later found by John Strype, when the merchant Humphrey Monmouth was arrested for possession of heretical books and petitioned the King's Council. Monmouth related in a petition to Cardinal Wolsey that Tyndale had given him a copy of "an English book, called Enchiridion" four and a half years earlier. 20 Monmouth is understandably cagey about his part in this, and is careful to declare that he sought authority for possessing the book by sending it "to the abbess of Dennye at her request." Monmouth admitted also having "another copy of the same book, which a friar of Greenwich asked for." This copy he now believed was in the hands of John Fisher, Bishop of Rochester.
The Enchiridion was an early work of Erasmus, begun in 1499, and published in 1503 in a selection of Lucubratiunculae. In 1515 it was reprinted in an edition on its own by Thierry Mertens in Louvain, and again in 1518 by Johann Froben in Basel in revised form, with a preface to Paul Volz. The 1518 edition was deliberately promoted as part of Erasmus's New Testament strategy, which also saw in the same year the Ratio verae theologiae, an expanded version of the Methodus, the second of the prefaces of his 1516 New Testament. These two works were among the most frequently reprinted of Erasmus's works, and together constitute a literary theory of the Bible. An English translation of the Enchiridion was printed in London in 1533 by Wynkyn de Worde for John Byddel. 21 23 The most learned discussion is by Anne O'Donnell in her edition of the 1534 second edition for the Early English Text Society in 1981. She concluded, after a stylistic analysis, that "The internal evidence for Tyndale's authorship of the 1533 Enchiridion is no more conclusive than the external evidence." 24 New light on this question emerged in 2015 with the discovery of a manuscript in the collection of the Dukes of Northumberland; it was first listed as present at Alnwick Castle in 1872. 25 It consists of an English translation of Erasmus's Enchiridion in brown ink on paper, a large quarto (285 × 195 mm) comprising 145 leaves. An export licence was deferred, and enabled by gifts from the National Heritage Memorial Fund, the Friends of the British Library, the Friends of the National Libraries, and an anonymous donor. 26 In September 2015, the British Library announced its acquisition and gave it the new shelf mark of BL Add. MS 89149. 27 It is a fair copy of the text, with penwork initials, in a comely gothic cursive hand, entitled "A compendevs tretis of the sowdear of Crist called encheridion which Erasmus Roterodame wrote vnto a certen courtear a ffrende of his." There is a possibility that it is a presentation copy, and some markings suggest its possible presence in a printer's shop. However, it is not the copy used in the 1533 edition. It does not contain the letter to Volz, which prefaced Erasmus's work in most Latin editions after 1518, and is included in the printed English version of Wynkyn de Worde. There are other differences in wording and phrasing. The manuscript also contains a completely different set of marginal notes from the printed edition, copied in the same handwriting as the main text, and supplying an often subtle and sophisticated amplification (and sometimes commentary) on Erasmus's text. These divergences are unlikely to have been introduced by a printer. It therefore represents a hitherto unknown version of the English Enchiridion.
The colophon (Figure 1) is of exceptional value, as it reads "translated oute of the latten into englisshe in the yere of our lord god m l vcxxiii" [1523]. 28 The manuscript is therefore now the earliest known translation into English of any work by Erasmus, predating and must have been based on a parallel copy of some kind. At the same time, the changes show an intelligent intervention, and also introduce the Volz letter.
There is also a newly open question of who translated the manuscript version. The date of 1523 is, of course, exactly the year suggested in Foxe for Tyndale's version. There is no new external evidence linking the text to Tyndale. But there is one moment of textual detail of exceptional interest. It comes on the verso of folio 54, in an important passage in the Fourth Canon or "Rule" of Erasmus's work, where he discusses how to distinguish between the nature of good and evil, and how the reading of scripture contributes in this way to the moral life: Quaedam vero media, veluti valetudo, forma, vires, facundia, eruditio et his similia. Ex hoc igitur postremo genere rerum nihil propter se expetendum neque magis minusve adhibendae sunt, nisi quatenus conducunt ad summam metam. 29 Certain things, Erasmus says, are neither good nor bad, but indifferent in themselves, such as health, beauty, strength, eloquence; these things, he asserts, are neither to be sought after nor rejected on their own account, but should be judged only in so far as they contribute to a higher goal. In Wynkyn de Worde's printed version this is translated as: nothing ought to be desired/ for it selfe neyther ought to be vsurped more or lesse/ but as ferforthe as they make & be necessarye to y e chefe marke/ I meane to folow Christes lyuyng. 30 Those last words, "I meane to folow Christes lyuyng," are not in Erasmus's Latin, which reads: nisi quatenus conducunt ad summam metam ("except in so far as they lead to the highest goal"). The English version is not content with the bare Latin, and adds a gloss. While Erasmus makes a philosophical point about weighing up moral judgements, the English insists on a theological explanation: the "chefe marke" is Christ's living example.
The manuscript reading is subtly different, however (Figure 2). Here the gloss is longer: Certen thinges verely be indifferent or betwene both of their owen nature neither good ne bad nother honest ne filthie. As helth bewtie strength facundynes connyng and such other. Of this last kinde of thinges therefore nothing ought to be desyred for it self nother ought To be vsurped more or lesse but as farforth as they shalbe necessary vnto the chieff marke I meane to the folowing or Cownterfetting of Cristes lyving. 31 "To counterfeit," in the sixteenth century as now, has a mainly pejorative sense. It means to make a fraudulent imitation of something with an intention to deceive, such as a false coin or a forged painting or document. 32 Yet there is also a sense, used by Chaucer in his translation of Boethius, of "to contrefeten" meaning "to be like, to imitate, simulate, resemble" (without implying deceit). 33 In a range of poems from The House of Fame to the Canterbury Tales, Chaucer also uses the word to mean "to imitate conduct," as when it is said of the Prioress in the General Prologue that she "peyned hire to countrefete cheere." 34 While Chaucer is, of course, an exceptionally sensitive user of the language, positive senses of "counterfeit" are found in other literary sources, such as the beautiful fourteenth-century poem Pearl, the A Version of Piers Plowman ("Of alle maner craftus I con counterfeten") and Thomas Hoccleve's Regiment of Princes. 37 Nonetheless, it is in a powerfully original sense of the word that Tyndale comes to use it in translating 1 Corinthians 4: 15-16: In Christ Iesu / I have begotten you thorowe the gospell. Wherfore I desyre you to counterfayte me. 38 While Chaucer uses "contrefete" in both directions, as a word meaning to imitate or follow an example, good or bad, Tyndale goes much further, in using the word to be equivalent to the imitation of Christ. In this, he has to run directly against the grain of a peculiar religious sense of "counterfeit," seen for instance in the Homily "On Salvation" (1547). This refers to "a Ded, deuillishe, counterfeit, and feyned faith," as a shorthand for religious hypocrisy or false faith. 39 Similarly, Hugh Latimer compares the superstitious use of relics or images of saints to a "counterfaite" silver coin; and in Nicholas Ridley it comes to be a term (in a transferred sense) for a religious hypocrite of any kind. 40 Catholics, equally, used the word to describe Protestant hypocrites. While these references post-date Tyndale, the word was already a commonplace to mean a turncoat in religion in the fourteenth century, as in Richard Morris's Pricke of Conscience: "Þus sal anticrist þan countrefette Þe wondirs of God." 41 29 Erasmus, Enchiridion militis Christiani, in Opera omnia, ed. J.
II. Counterfeiting Christ
Tyndale's word "counterfayte" shocks by reclaiming this territory for the imitation of Christ. It is, we note, exactly in line with the usage in the British Library manuscript version of the Enchiridion: "the folowing or Cownterfetting of Cristes lyving." The phrase in Paul's Greek which Tyndale translates as "counterfayte" is μιμηταὶ γίνεσθε. Paul uses the identical phrase in 1 Corinthians 11:1. In the Vulgate, the Latin word used here (as also in 1 Thessalonians 1:6) is imitatores; Erasmus follows this in rendering imitatores mei estote in his 1519 translation. followers of me." Tyndale follows the sense of imperative here, as from Luther's "Seid meine Nachfolger!" However, the peculiar inflection of "counterfayte" is different, and is emphasized in Tyndale's Prologe or preface vn to the pistle off Paul to the Romayns, produced at the same time as his ground-breaking translation, where Tyndale attempts to explain Paul's theological understanding of Christ's authorship of our salvation: "even so here setteth he hym forth as an ensample to counterfayte / that as he hath done to vs / even so shulde we doo one to another." 42 Tyndale's "counterfayte" is exactly the kind of daring usage Daniell admires, while also contradicting his rule of thumb that Tyndale prefers Saxon monosyllables. This word is not only Latinate (and French), but also inherently complex. To understand it we require not only a scriptural concordance of Pauline vocabulary, but a deeper understanding of Paul's literary and philosophical context. The earliest full analysis of this is in Erasmus's Annotationes to his Novum Instrumentum in 1516. Erasmus notices an ambiguity in Paul's meaning. On the one hand Paul asks us to become Christ, as if Christ says genui vos, "I have begotten you." On the other hand Christ urges us to copy him (imitemini me), using the language of imitation: "you shall have imitated or copied me, as if I had given birth to you" (id fiet si me parentem expresseritis). 43 It is an odd poem (in doubtful praise of Augustus) to quote in a commentary on 1 Corinthians, and an odd line, too, with a barely concealed irony about how a child's similarity to the mother cannot be so guaranteed in the father. However, the Horatian resonance deftly succeeds in taking us into the complex tradition of understanding the idea of imitation, what Plato and Aristotle called μίμησις, a philosophical tradition which lies at the heart of language theory and of how to understand works of art. This tradition merges with the medieval Latin idea of imitatioderived largely from Horace in De arte poeticaof how to follow an example, in moral as well as representational terms. The philosophical application of the classical theory of imitation to the Christian life begins in Erasmus's work well before the New Testament, in the Enchiridion, and continues afterwards into the various editions of Ratio verae theologiae (1518-1523). Firstly, in the Enchiridion, there is the familiar medieval theory of imitation as the following of a moral example: Alterius fidem, alterius imitare caritatem ("Imitate the faith of the one and the charity of the other"). 45 This is allied to a theory of how works of art and literature work on the mind, even to change behaviour in the same way: Peragatur in te, quod illic osculis repraesentatur ("Let what is represented there to the eyes be enacted within you"). 
46 However, Erasmus also extends the representational part of the theory into much more sophisticated areas of μίμησις, involving classical examples such as Apelles in the visual arts, or Plato and Aristotle in theoretical frameworks. He freely mixes examples from the prophets or gospels with extended comparisons from Homer or Virgil. He then applies this theory of representation openly to the mystical tradition of the imitatio Christi. The Enchiridion is dotted with injunctions such as Christum facito in sanctis imiteris ("make sure you imitate Christ in his saints"). At this point, classical theory of imitation comes face to face with the devotional practice Erasmus knew from his youth in the low countries, via the fifteenth-century masterpiece De imitatione Christi (by Thomas of Kempen) and the devotio moderna.
It is therefore extraordinary to find that "counterfeit" is the English word used to convey these parts of the Enchiridion in both the manuscript and the later printed versions. In her edition, O'Donnell glosses this word as a synonym for "to imitate." 47 "Counterfet the ones feith and the others charitie"; "Se thow counterfet Crist in his saintes." 48 These readings from the manuscript version are also adopted in the printed text to translate cognates of imitare. However, the word is also used to translate other words in Erasmus: "Cownterfet ye them not therefore. For your father knoweth whereof ye have nede afore ye desire it of hym" (where the phrase in Erasmus is Nolite ergo assimilari eis). 49 In relation to the imitation of Christ he translates in carne et sermone tradidit et moribus expressit as "Crist here in his body taught w th his owen mouth and doctryne and expressly presented or counterfettid in his maners and lyving miracles here." 50 The translator has therefore noted how Erasmus uses a range of verbs (imitare, assimilare, exprimere) to create a theory of representation. Perhaps most interesting of all, however, is how the translator uses the same word "counterfet" to express Erasmus's meaning when it is at its most exploratory and inventive.
Prominent among such places is a passage where Erasmus discusses the relationship between the exterior and the interior aspects of the soul via a citation from Virgil's Georgics: Tum variae illudent species atque ora ferarum. Fiet enim subito sus horridus atraque tigris squamosusque draco et fulva cervice leaena, aut acrem flammae sonitum dabit. 51 Here the translator in the manuscript version praises "the excellente connynge poet Virgill," but rather than attempt a verse translation, instead paraphrases Virgil's meaning: "dyuerse symylitudes and ffassions of wilde bestes shock the for sodenly he wilbe a ferefull swyne and a foule tigre and a dragen full of scales and a lyone w th a red mane." 52 In the commentary in the right-hand margin, the translator explains this as a reference to Prometheus (he means of course Proteus), who "changeth hym selff to all maner facions"; back in the text, he explains this as the power of poetic imitation, which he interleaves via a further translation from the poem, which possesses such power as "shall counterfet the quyk sownde of the flame of fire" (Figure 3, fol. 37 r ). This improvised use of the word "counterfet," repeated in the printed version of the English Enchiridion, has no equivalent in Erasmus's text. 53 In relation to another passage later in the Enchiridion, in the Sixth Rule, the translator shows how he has also learned to apply the word "counterfet" in a philosophical sense. Here Erasmus discusses a position taken in Plato's Republic but disputed in Aristotle's Nicomachean Ethics, that virtutem nihil aliud esse quam scientiam fugiendorum atque expetendorum. 54 The printed version of 1533 glosses this as a person choosing between two paths: them "that folowe vertue and shall accompte them that do otherwyse worthy to be lamented and pityed / & not to be counterfayte or folowed." 55 It is clear that "counterfayte" has become a key word to capture the elusive quality of Erasmus's theory, especially as applied to the imitation of Christ. How can the human truly follow the pattern of the divine? And how does this occur through reading a text, such as the gospels or epistles? Here, the reading in the printed version ("to be counterfayte or folowed"), as earlier in the manuscript ("the folowing or Cownterfetting of Cristes lyving"), is especially striking. It combines, perhaps ambivalently, the two halves of the theory of imitation: to follow an example and to make an identical copy. If there is a sense of ambivalence, it is hardly surprising. In this early stage of Erasmian theory, before De copia, never mind the New Testament prefaces, Erasmus puts together a neo-Platonic essay on the soul and the body; with a rhetorical theory of poetic similitude; with an ethical theory of imitation in human behaviour; with an at times rational and at times rapturous account of imitatio Christi. Any reader would be excused in feeling confused. 51 "But when you hold him in the grasp of hands and fetters, then will manifold forms that baffle you, and figures of wild beasts. For of a sudden he will become a bristly boar, a deadly tiger, a scaly serpent, or a lioness with a tawny neck; or he will give forth the fierce roar of flame"; Virgil, Georgics iv.406-9, Latin text and translation from Virgil, Works, vol. 1, ed. H.R. Fairclough, Loeb Classical Library (London: William Heinemann, 1920), 224-5; cited in Enchiridion, in Ausgewählte Werke, ed. Holborn, 51.
It must be admitted that the English translator, with no readily available technical vocabulary, counterfeits one remarkably well.
Who could this translator be? An answer might be found in the fact that this phrase, "to counterfeit and follow," is found (outside of these instances in the translation of the Enchiridion) just three times in English before 1537: all three are in Tyndale's controversial work, The parable of the wycked mammon. Each time he uses "counterfet and folowe" as a technical term for a process of imitation. In one case, he uses the term to describe a negative process, a form of behaviour that we imitate, which leads to unrighteousness and sin. But he also uses it of positive examples of Christian living: "But & if ye counterfette and folowe God in well doinge then no doute it ys a sygne y t the spyrite of God ys in you & also the favoure of God." 57 Most strikingly, it describes the process by which a Christian is led to imitation of Christ: "Every Christen man ought to have Christ all ways before his eyes / as an ensample to counterfaite & folowe / & to do to his neyboure as Christ hath done to him." 58 A short guide to the parts of speech in 1537 uses "I folowe or counterfeyte the" as equivalent to Emulor te, idest, imitor, which shows some cross-fertilization between theological and rhetorical usage. 59 But nowhere else is this phrase used in this way until the 1540s, interestingly enough, in another English translation of Erasmus, this time the Paraphrases, in the version prepared by Nicholas Udall. 60 While no internal textual evidence can finally prove the authorship of a text, it is not easy to imagine this English verbal pattern for the theory of imitation happening twice, independently, in the 1520s. We are left with two possibilities: either Tyndale is the translator behind the English Enchiridion, or else he was one of its earliest readers. Given the external witness provided by Humphrey Monmouth, the simplest explanation is that the translation is his.
The dating of the manuscript in 1523 also provides an explanation for the remarkable use of "counterfayte" in the 1526 New Testament. It shows Tyndale's Erasmianism definitively at work before he left England. As for which version of the English Enchiridion is more exactly his (the manuscript or the 1533 text), the best answer is neither. The phrase "counterfeit and follow" is used in both witnesses, but in every instance against the reading of the other. Another copy may have once existed which used the phrase more consistently; or else a later copyist may have done some incomplete tidying. Nevertheless, the phrase is a kind of Tyndalian signature, a shorthand encapsulating a key scriptural concept. In any event, the single word "counterfeit" is used over a dozen times in each version, always in this case corroboratingly. We can see this as a highly important and complex attempt to create an English Erasmian language, developed to account for a radical theory of scriptural interpretation. Indeed, the oddity of the phrase "folowing or Cownterfetting" registers the translator's uncertainty in fully understanding it. The word counterfeit is already strange in context, added to by the hendiadys, in which two terms are co-joined without quite overlapping. 61 To explain this fully, we need to consider further the concept of imitation in Erasmus. For at the heart of Erasmus's method is a bold alignment between a literary theory of imitation (the ancient hermeneutic idea of μίμησις) and the moral and theological concept of imitation. This also enables us to reconsider the question of what kind of reader Tyndale is of Erasmus, and a posteriori, of his Bible. 57 The parable of the wicked mammon, sig. D3 r . 58 The parable of the wicked mammon, sig. F5 r . 59 Certayne briefe rules of the regiment or construction of the eyght partes of speche (London: T. Berthelet, 1537), sig. C1 v . 60 Jesus "ordeyned a patarne or an example in hymselfe, for vs to counterfayte and folowe"; The first tome or volume of the Paraphrase of Erasmus vpon the Newe Testamente ( [London]: Edwarde Whitchurche, 1548), sig. B4 r . 61 O'Donnell comments on the "doublet" as a feature of the 1533 version, xliii-iv.
III. Tyndale and the Renaissance Bible
By 1530, Tyndale's explicit references to Erasmus tended to be less than flattering. In "W.T. to the Reader," Tyndale's preface to the Pentateuch, Tyndale made a Lutheran joke in noting how Erasmus's wit turns little gnats into huge elephants. 62 And in making An Answere vnto Sir Thomas Mores Dialoge, Tyndale could not resist a little swipe at More's "darling" Erasmus: But how happeth it that M. More hath not contended in lyke wise agaynst his derelynge Erasmus this long while? 63 Yet even the sentence making fun of Erasmus acknowledges his authority, citing the Encomium Moriae against More. For if the work were translated into English, Tyndale says (The Praise of Folie was not printed in English until 1549) everyone would see how far More had changed from his humanist youth. It is clear Tyndale has read the Encomium in Latin. Indeed, the phrase "derelynge Erasmus," shows that Tyndale was also reading the Opus epistolarum of Erasmus in Latin, in which More keeps using the phrase Erasme charissime. 64 Erasmianism (and humanist Latinity) inflect even Tyndale's most famous phrase: This comes directly from Erasmus. In Paraclesis, the preface to the Greek New Testament in 1516, Erasmus declared how he disagreed with those unwilling for holy scripture to be translated into the vulgar tongue, as if Christ taught doctrines that could scarcely be understood by theologians, or as if the strength of the Christian religion consisted in people's ignorance of it. "If only," Erasmus continued rapturously, "the farmer would sing parts of scripture at the plough (ad stivam aliquid decantet agricola), the weaver hum them to the movement of his shuttle, the traveller lighten the weariness of his journey with like stories." 66 Tyndale's direct knowledge of Paraclesis is shown by The Obedience of a Christen Man (1528). 67 The truly radical claim of Paraclesis is that anyone can read the Bible. Weavers are readers first, even before they are believers. A labourer or a weaver (a fossor or a textor) can be a true theologian, Erasmus declares, as long as he teaches and expresses in his own life the philosophia Christi. Some among the learned call this philosophy crassula et idiotica ("a bit stupid and vulgar"). These people even think this philosophy is illiterata; but Erasmus responds that it has "drawn the highest princes of the world to its laws, an achievement which the power of tyrants and the erudition of philosophers cannot claim." 68 Here, Erasmus reverses the cliché among the church fathers that the New Testament is inferior in style to the literature of the ancients. Simultaneously, he elevates the gospels to the highest expressions of human writing (litterae hominum). "Why," he asks, "have we steadfastly preferred to learn the wisdom of Christ from the writings of men than from Christ himself?": Cur statim malumus ex hominum litteris Christi sapientiam discere quam ex ipso Christo? 69 Would that princes, priests or schoolmasters, he avers, teach this vulgar doctrine rather than the subtleties of Aristotle or Averroes. 70 For the new philosophy consists in reading the litterae of Christ: Platonicus non est, qui Platonis libros non legerit; et theologus est, non modo Christianus, qui Christi litteras non legerit? 71 Erasmus slips in the phrase litterae Christi almost without us noticing. Indeed he does so with conscious literary play, since just a moment before, he said that philosophia Christi was illiterata by the standards of the eloquent. 
He plays with nouns such as veritas and sapientia, and adjectives such as eruditus and antiquus, in such a way that we do not know quite where we are any more. Christian doctrine is less subtle than Aristotle, but wiser; less eloquent than classical literature, but also as antiquus and as beautiful as Plato. Humanist values of classical literature and eloquence are both appealed to and overturned by the transformative power of Christ's writings. The result is that, against the prejudices of theologians and humanists alike, litterae Christi are proclaimed as an ultimate form of literature. 72 In 1529, an English translation of An exhortation to the diligent studye of scripture, made by Erasmus Roterodamus appeared in Antwerp, probably translated by George Joye. By 1536 it was being used as a preliminary to a reprint of Tyndale's New Testament. 73 It appears, then, that early readers of Tyndale had no problem assimilating him with Erasmian humanism. 74 This does not mean they always understood the radical claims Erasmus was making. Joye's own translation of litterae Christi shows him tempering its edge and making it a conventional appeal to "scripture": We can not calle eny man a platoniste / vnles he have reade the workes of plato. Yet call we them Christen / yee and devines/ whiche never have reade the scripture of Christe? 75 The translation shows us the distance between Erasmus referring to scriptura and to litterae. Erasmus is telling his readers not only to read scripture but to read it differentlythat is, in the way they would any other ancient writer. Joye cannot avoid this inference in the extraordinary peroration to Paraclesis, when Erasmus compares reading Christ's writing as equal or better to meeting him in person: at hae tibi sacrosantae mentis illius vivam referunt imaginem ipsumque Christum loquentem, sanantem, morientem, resurgentem, denique totum ita praesentem reddunt, ut minus visurus sis, si coram oculis conspicias. 76 This is Joye's version: But the evangely doth represent and expresse the qwicke and levinge ymage of his most holy minde / yee and Christe him silf speakinge / healinge / deyenge / rysinge agayne / and to conclude all partes of him. In so moch that thou couldeste not so playne and frutefullye see him / All though he were presente before thy bodlye eyes. 77 Erasmian humanism and English evangelism come face to face in this paragraph. This is the heart of Erasmus's argument about what specially characterizes the New Testament as a literary text and gives it its literary value. This is expressed as a form of imitatio. The New Testament provides us with the person of Christ by a process of literary imitation: in his litteris praecipue praestat, in quibus nobis etiamnum vivit, spirat, loquitur ("he stands forth especially in this writing in which he lives for us even at this time, breathes and speaks"). 78 It is in this context that the English vocabulary for Erasmian imitation in the 1523 manuscript version of the Enchiridion becomes newly significant. Part of the reason for O'Donnell's caution in identifying Tyndale as the translator of the 1533 printed text is her analysis of Tyndale's theological vocabulary, particularly as it relates to the controversy with More over words such as "congregation" and "love." Here she notes that the 1533 text prefers the traditional terminology of "church" and "charity." At best, she says, the 1533 Enchiridion shows "an earlier stage of his development both theologically and stylistically." 
79 One reading in the manuscript text may indeed represent an earlier stage even than the 1533. In the discussion of the death of the body in Chapter 1, the manuscript gives: "For verely god is the liff of the soule / and where god is there charitie is & compassion of thy neyghbour." 80 In the 1533 version, this becomes: "bycause her lyf is away / that is god. For veryly where god is / there is charite / loue & compassyon of thy neyghbour / for god is that charite." 81 Is the introduction of "loue" here evidence of second thoughts, either in another manuscript, or even by an editor of the printed text under the influence of Tyndale's New Testament? In this respect, it is also surely interesting that the manuscript Enchiridion is now the earliest recorded usage of the distinctive phrase "filthy lucre," a distinctive part of Tyndale's vocabulary and quickly a proverbial phrase in early modern English. 82 76 Paraclesis, in Ausgewählte Werke, ed. Holborn, 149. 77 An exhortacyon to the diligent studye of scripture, sig. A5 r . 78 Ausgewählte Werke, ed. Holborn, 146; tr. Olin, Christian Humanism, 105. 79 Enchiridion, ed. O'Donnell, liii. 80 London, British Library Add. MS 89149, fol. 6 r . 81 A booke called in latyn Enchiridion militis christiani, sig. A7 r . 82 London, British Library Add. MS 89149, fol. 69 r ; Early English Books Online records over 2500 instances. The MS is also the first recorded use of "jote and tittle" (fol. 14 v ). The phrase "wicked mammon," used in the printed Enchiridion (ed. O'Donnell, 99), is rendered in the MS as "the dyvell of Innyquytie" (fol. 56 r ).
In any event, there is a reason to give a more positive valuation of O'Donnell's judgement that "the English Enchiridion may represent Tyndale's apprentice-work as translator" (liii). For the manuscript contains, we have seen, a sophisticated English philosophical language. If the imprint of this marks the vocabulary of counterfeiting Christ in Tyndale's New Testament, a broader version is evident in the manuscript Enchiridion: Next is the spirit wherein we represent the symylitud of the nature of god in whiche also oure most blessed maker after the orygynall patorne or example of his owen mynde hath graven the eternall lawe of honesty w th his fynger that is so swete w th his spirit the holie goste. 83 What does it mean to call the spirit a "symylitud of the nature of god"? Here we need to come to terms with the complex approach to figurative language in the Enchiridion. The section in question, entitled De tribus hominis partibus, spiritu et anima et carne ("On the three parts of man: spirit, soul, and flesh"), combines a neo-Platonic analysis of the physical human being, with a figurative account of metaphysical process, in which the division between flesh and spirit is imagined (as in Plato) as a conflict between divine likeness and the "brute animal," with "the middle soul between the two," mediating between like and unlike. 84 "Do you wish me to point out the distinction between these parts in more concrete language?" Erasmus asks, pointedlyquoting from the Satires of Horace (2.2.3) in the process.
How do we think Tyndale responds to this? It certainly does not fit the received view of Tyndale's approach to scripture, such as in The Obedience of a Christian Man: arme thy selfe to defende the with all / as Paul teacheth in the last chapter to the Ephesians. Gyrde on the the swerde of the spirite which is Gods worde and take to the the shilde of fayth / which is not to beleve a tale of Robyn hode or Gestus Romanorum or of the Cronycles / but to beleve Gods worde that lasteth ever. 85 Reading God's word is manifestly different from reading other literary works, whether the Gesta Romanorum (a loose anthology of thirteenth- and fourteenth-century tales, including religious ones, used as a source by Chaucer and Shakespeare) or else legendary fictions like Robin Hood. The Enchiridion (as we have seen at length) is littered with references to Horace, or Virgil, which the English versions make considerable efforts to master. One of these cases is the section on Christian imitation, where the Georgics is made into a powerful leading metaphor of the Protean struggle between body and soul; another is the account of death in the opening chapter, with an extended comment on the cruel violence extended by Achilles to the body of Hector, before "the walles of troy," as the marginal comment in the manuscript helpfully adds. 86 To accommodate these classical sources into the methodology of Tyndale is a large stretch, yet we need at least to acknowledge his evident familiarity with the arguments of Erasmus both here and in Paraclesis, where he says that the litterae Christi are like other forms of litterae, to be valued all the more highly than litterae hominum. Well before the Protestant rallying cry of sola scriptura, Erasmus adopts the more sensational principle of sola littera: we know Christ by his writings best of all. Yet is classical rhetoric so foreign to Tyndale as we think? One of the recommendations he took to Cuthbert Tunstall in 1523 was a translation he had made from an oration of Isocrates. 87 Isocrates is one of the masters of what Shuger calls "The Christian Grand Style in the English Renaissance." 88 We also need to see how central arguments about figuration are to Erasmus. This leads, in the Fourth Rule of his handbook, to a crucial statement on Biblical interpretation: Litteras amas. Recte, si propter Christum. Sin ideo tantum amas, ut scias, ibi consistis, unde gradum facere oportebat. Quod si litteras expetis, ut illis adiutus Christum in arcanis litteris latentem clarius perspicias, perspectum ames, cognitum atque amatum communices aut fruaris, accinge te ad studia litterarum. 89 Reading scriptures leads to the knowledge of Christ. It does so by a process familiar from ancient literary theory: Christ is represented in scripture, like the mystery of meaning itself, in the way that knowledge is figured within words. The hidden Christ is revealed in litterae. 83 Translating: Spiritum vero, qua divinae naturae similitudinem exprimimus, in qua conditor optimus de suae mentis archetypo aternam illam honesti legem insculpsit digito, hoc est spirito suo (Ausgewählte Werke, ed. Holborn, 52). 84 CWE, 66: 51. 85 The obedience of a Christen man, sig. T4 r . 86 London, British Library Add. MS 89149, fol. 5 r .
In the Fifth Rule which follows, Erasmus gives a fuller explanation, comparing the relationship of hidden and revealed truth to the relationship between figurative and literal meaning: Idem observandum in omnibus litteris, quae ex simplici sensu & mysterio, tamquam corpore atque animo constant, ut contempta littera, ad mysterium potissimum spectes. 90 All literary works are made up of a literal sense and mysterious sense, body and soul. Erasmus's advice is "to ignore the letter and look rather to the mystery," which he backs up by reference to Homeric and Ovidian myths such as Prometheus, Circe, and Sisyphus, interspersed with the story of Adam in Genesis.
A complex figure is used by Erasmus to explain how figures work. In Plato's Symposium, he recalls, Alcibiades (the Athenian general and lover of Socrates) compares Socrates to those images of Silenus which enclose divinity under a lowly and ludicrous external appearance. This is true of any literature, and also applies to Scripture: Cuiusmodi sunt litterae poetarum omnium et ex philosophis Platonicorum. Maximo vero scripturae divinae, quae fere silenis illis 87 Daniell, Tyndale: A Biography, 87-8. 88 Sacred Rhetoric: The Christian Grand Style in the English Renaissance (Princeton: Princeton University Press, 1988), 14-16. 89 You love the study of letters. Good, if it is for the sake of Christ. If you love it only in order to have knowledge, then you come to a standstill. But if you are interested in letters so that with their help you may more clearly discern Christ, hidden from our view in the mysteries of the Scriptures, and then, having discerned them, may love him, and by knowing and loving him, may communicate this knowledge and delight in it, then gird yourself for the study of letters. (Ausgewählte Werke, ed. Holborn, 64; CWE, 66: 62) 90 "The same rule applies for all literary works, which are made up of a literal sense and a mysterious sense, body and soul, as it were, in which you are to ignore the letter and look rather to the mystery"; Ausgewählte Werke, ed. Holborn, 70; CWE, 66: 67.
Alcibiadeis similes sub tectorio sordido ac paene ridiculo merum numen claudunt (70). This receives the following translation in the manuscript English Enchiridion (Figure 4); "all maner of lerenyng," it is declared, "include in them selff a playne Sciens and a mistery": the literall sence litell regarded thow shuldest loke chiefly to the mistery of which maner ar the leturys of poyettes & of those philosophers which folowed plato but most of all holy Scriptures which as they were some salmes made of Alcibiades vnder a Rude and folissh covering include thinges pure divyne and all to gither godly. 91 The translation struggles to make sense of Erasmus's Latin, and of the shock of the ideas it contains. In the text, the "ymage of Adam" is described as an example of "alligory." In a moment of comic relief, either the translator or the scribe compares this to the "Psalms" of Alcibiades. But the marginal annotation in the manuscript makes a much better attempt: "Sileni be ymages," it is said, "which conteyn vtward the Symylitude of a fole"; yet "when they ar opened Sodenly apering Som excellent & mervilous thinge." The reference to Plato is explicated carefully: "for Socrates was so simple vtward & so excellent inwarde." The translation responds hand in hand to Erasmus's parallel theories of imitation and figuration. Earlier we saw the word "counterfeit" as an example of the struggle of an English translator to understand the complexity of Erasmus's Latin. Repeatedly, the word "counterfitt" is used to translate Latin cognates of imitatio. We can here see why. The Sileni Alcibiadis are one of the leading tropes in all Erasmus, subject to an elaborate commentary in Adagia III.iii.1, and a powerful discussion in the Praise of Folly. In the Sileni Erasmus identifies a metaphor for scripture itself, and at the same time a μίμησις of Christ. Yet if Tyndale is the translator, we have a final puzzle. Enchiridion states that in reading scripture we should prefer interpreters qui a littera quammaxime recedunt -"the literall sence litell regarded." The marginal note in the manuscript confirms: "The mistery must be lokid vpon in all maner lernyng." We need the allegory, as much in scripture as in pagan texts: "Ye peradventure a poyettes ffable in the alligory shalbe Redy w t somwhat more frute than a narracion of holie bokes iff thow shuldest Rest in the rynde or vtter part only." How do we explain, then, that in the Obedience, five years later, Tyndale states: "Thou shalt vnderstonde therfore y t the scripture hath but one sence which is y e literall sence," the very opposite of the lesson of the Enchiridion? 92 The appeal to the literal sense has become a fetish of reading Tyndale, both in Daniell and in a counter direction in James Simpson. 93 In the one, the literal sense is the pathway to truth and righteousness, in the other a virus, equivalent to "textual hatred." In each account, Tyndale is made the friend of Luther and enemy of Erasmus. The English Enchiridion is kicked into the dust as irrelevant juvenilia. Yet in that case, Tyndale has barely read his text. Time and again Erasmus states that the fundamental process of language is figurative: we substitute one way of saying for another. Scriptural language is not exempt from this; how could it be? In Ratio verae theologiae Erasmus lists hundreds of examples of figures of speech from the Bible. Christ himself, he says, loves figures of speech. 
This is central to the way scripture works, in terms of meaning, and in the impression it makes on our emotions.
What we need is to make subtler distinctions between allegory and figuration. A way forward is offered by Shuger: "In Erasmus, a basically rhetorical understanding of language takes the place of medieval allegoresis." 94 A similar point might be made about Luther. Try as he might, Luther cannot get rid of the figurative. To get round his difficulty, he sometimes says that figurative meaning is itself part of the literal sense. 95 This is a category error: literal and figurative are terms which only ever work in tandem, as two parts of something else. Yet in its way Luther's assertion is part of a longstanding debate in Christian exegesis about the turn of the literal, as in Hugh of St. Victor in the twelfth century: "We read the scriptures," they say, "but we don't read the letter. The letter does not interest us. We teach allegory." How do you read Scripture, then, if you don't read the letter? Subtract the letter and what is left? 96 Erasmus recognizes both sides of this argument better than anyone. But even Luther, when he disagrees with Erasmus, often does so on a point of figurative interpretation. In that way, he remains an Erasmian even as he rejects Erasmus. 91 This and quotations in the following two paragraphs are taken from London
Could it be that something similar happens in Tyndale? Tyndale in the Obedience improvises a similar argument, as he attempts to come to grips once more with the Erasmian inheritance. First, he makes a point straight out of Erasmus's Ratio: Never the later the scripture vseth proverbes / similitudes / redels or allegories as all other speaches doo / but that which the proverbe / similitude / redell or allegory signifieth is ever the literall sence which thou must seke out diligently. As in the english we borow wordes and sentences of one thinge and apply them vnto a nother and geve them new significacions. 97 This equivocation comes straight out of Luther: "but that which the proverbe / similitude / redell or allegory signifieth is ever the literall sence." This either makes a nonsense of the idea of the figurative in language, or begs the question of what we mean by the literal. But in the next sentence, Tyndale changes tack again, now giving a nice definition of the process of figuration: "As in the english we borow wordes and sentences of one thinge and apply them vnto a nother and geve them new significacions." In fact, this sentence is close to a translation from Erasmus's De copia, where he says that metaphor is "so called because a word is transferred away from its real and proper signification to one that lies outside its proper sphere." 98 It would be nice to think that Tyndale is translating this sentence directly, since Erasmus gives, as the most appropriate word for the Greek μεταϕορά, the Latin word translatio. All of language is metaphorical, Erasmus says; every act of making meaning and of interpreting meaning involves translation.
Tyndale knows this at some level, intimately. Everyone who reads his translations admires his feel for figurative language. In the Obedience, he recognizes figuration even as he disavows it, as when he says: "loke yer thou lepe / whose literall sence is / doo nothinge sodenly or without avisemente." 99 What is that sentence doing if not trying to come to terms with the slippage between different ways of meaning? Like Luther, part of what he is doing is to distinguish in interpretation between acknowledging figures of speech (figuration that is present in the text of the Bible), and what we might call interpretative allegorization, where a difficult passage in the Bible makes us reach for an alternative way of putting it. So he says: So when I saye Christ is a lambe / I meane not a lambe that beareth woll / but a meke and a paciente lambe which is beaten for other mens fautes. Christ is a vine / not that beareth grapes: but out of whose rote the braunches that beleve / sucke the sprite of lyfe and mercy. 100 This distinctly complex interpretation is plucked out of nowhere as the plainest of plain sense. A caveat follows, "Which allegories I maye not make at all the wilde adventures," 101 but this only serves to make allegory an even more wildly figurative process. He then goes on to give an example of interpretation in action, as he figures out what is meant by Peter cutting off the ear of the servant Malchus in John 18. Now we find Tyndale in free allegorical mode: "And of Peter and his swerde make I the law and of Christ the Gospell sayenge / as Peters swerde cutteth of the eare so doeth the law." 102 There is no better example of Tyndale the Erasmian. He asserts that there is only one sense while manifestly dealing with two and sometimes more. He denies the power of allegory while indulging in some imaginative allegorizing. He also declares that scripture is a different kind of text from any other literature: "Moare over if I coulde not prove with an open texte that which the allegory doeth expresse / then were the allegory a thinge to be gested at and of no greater value then a tale of Robyn hode." 103 Yet all the time he reads scripture exactly in the same way that he might approach any other literary text. In this way, the spirit of the Enchiridion still breathes in Tyndale even as he declares open war on Erasmian modes of interpretation. This shows the reach of Erasmus in the sixteenth century. It is not that Erasmus is the first person to read the Bible in a literary way. The rabbis were doing that as soon as Scripture was written down. Every medieval commentary, pace Erasmus, was using rhetorical methods from the Greek and Roman grammarians. 104 But Erasmus takes the leap of saying that reading is the key to the New Testament; and every reader by that fact is finding out for herself a philosophia Christi. Did Tyndale remember, as he translated the New Testament, Erasmus's plea for "the spirituall sens or knowledge of holy scripture"? 105 Erasmus here resists the appeal for meaning "only after the litterall sens." Tyndale formally insisted that there was only one meaning, the literal; but also assumed that to understand the literal, we need to look for the spiritual. In this powerful equivocation in the interchange between littera and spiritus, he could not help being still a true Erasmian.
Better Together
Objectives: The UK Biobank (UKBB) and German National Cohort (NAKO) are among the largest cohort studies, capturing a wide range of health-related data from the general population, including comprehensive magnetic resonance imaging (MRI) examinations. The purpose of this study was to demonstrate how MRI data from these large-scale studies can be jointly analyzed and to derive comprehensive quantitative image-based phenotypes across the general adult population.
Materials and Methods: Image-derived features of abdominal organs (volumes of liver, spleen, kidneys, and pancreas; volumes of kidney hilum adipose tissue; and fat fractions of liver and pancreas) were extracted from T1-weighted Dixon MRI data of 17,996 participants of UKBB and NAKO based on quality-controlled deep learning generated organ segmentations. To enable valid cross-study analysis, we first analyzed the data generating process using methods of causal discovery. We subsequently harmonized data from UKBB and NAKO using the ComBat approach for batch effect correction. We finally performed quantile regression on harmonized data across studies, providing quantitative models for the variation of image-derived features stratified for sex and dependent on age, height, and weight.
Results: Data from 8791 UKBB participants (49.9% female; age, 63 ± 7.5 years) and 9205 NAKO participants (49.1% female; age, 51.8 ± 11.4 years) were analyzed. Analysis of the data generating process revealed direct effects of age, sex, height, weight, and the data source (UKBB vs NAKO) on image-derived features. Correction of data source-related effects resulted in markedly improved alignment of image-derived features between UKBB and NAKO. Cross-study analysis on harmonized data revealed comprehensive quantitative models for the phenotypic variation of abdominal organs across the general adult population.
Conclusions: Cross-study analysis of MRI data from UKBB and NAKO as proposed in this work can be helpful for future joint data analyses across cohorts linking genetic, environmental, and behavioral risk factors to MRI-derived phenotypes and provide reference values for clinical diagnostics.
The UK Biobank (UKBB) 1 conducted in the United Kingdom and the German National Cohort (NAKO) 2 conducted in Germany are 2 of the largest ongoing population-scale cohort studies. Collecting a wide array of health-related information, including MR imaging data, these studies provide a unique level of individual phenotypic characterization of participants. 3 UKBB enrolls adults between ages 50 and 80 years, whereas NAKO enrolls participants between ages 20 and 70 years. 1,2 This restriction naturally limits the generalizability of study results for each of these single studies.
Merging study data and performing cross-study analyses may overcome such limitations and, in addition, yield higher statistical power, the opportunity to independently replicate results, and improved resource efficiency. 4,5 Data compatibility among different studies, however, poses challenges for proper merging. Recorded parameters and data structures might be substantially different, with little overlap. From a statistical point of view, the presence of distribution shifts, or biases, in the observed data due to differences in the data-generating processes can result in data misinterpretation when data from different sources are merged.
Cross-study analyses of imaging data are particularly challenging due to additional sources of variation in the image acquisition process, such as different scanner types, varying imaging protocols, and study-specific image processing algorithms. These factors can influence image-derived biomarkers, especially when magnetic resonance imaging (MRI) is used, a modality that is inherently difficult to standardize. 6 The practical relevance of such biases has previously been reported on different medical image data sets. 7,8 In the case of UKBB and NAKO, image acquisition protocols are partially aligned with the strategic intention to potentially enable cross-study analyses. Similarities cover an overall agreement on anatomic coverage and partial agreement on MRI sequences. 9 Still, central aspects of MR acquisition protocols vary significantly, including scanner models, magnetic field strengths, sequence parameters, 1,2 and the occurrence of artifacts. 10 Thus, it is unclear whether image-derived features from UKBB and NAKO can be pooled in a meaningful way for subsequent combined analyses.
Aiming to overcome such challenges, several techniques for data harmonization across studies have been proposed, including model-based approaches (eg, batch effect correction using ComBat 11 ["Combining Batches"] and its modifications 7,12-14). The advantage of model-based data harmonization is the possibility to selectively correct for undesired bias while preserving informative factors of variation. 7 This has recently also been demonstrated for medical imaging, mainly in neuroimaging and oncological imaging contexts. 7,12,13 The effective and valid application of such model-based data correction techniques requires detailed understanding of the data generating process. Usually, prior (common sense) knowledge about causal interactions among observed variables is used to harmonize data. As an extension, methods of causal discovery 15 may provide complementary information about the data generating process and thus inform the application of data harmonization techniques. This can be of particular relevance in large-scale studies with complex data interactions. 16 The purpose of this study is to demonstrate how imaging data from large-scale studies such as UKBB and NAKO can be jointly analyzed and to derive comprehensive quantitative image-based organ phenotypes across the general adult population.
Population Characteristics and Imaging Data
Data were obtained from UKBB and NAKO, both of which had obtained written informed consent from all subjects and approved our data analysis. Analysis of anonymized data from these studies was approved by the local institutional ethics committee.
This study reports findings from the first 20,000 data sets including MRI data available to us from the 2 study cohorts (10,000 data sets per study). After exclusion of data samples with MRI acquisition artifacts and erroneous automated organ segmentations (see below), image data and related demographic information (age, sex, body weight, and height) from 17,996 participants (8791 from UKBB and 9205 from NAKO) were used for further analysis. Summary statistics describing the study cohorts are provided in Table 1 and visualized in Figure 1. All image data analyzed in this work have been part of a previously reported technical work on deep learning-based abdominal organ segmentation, 17 which was the technical foundation for this present work. There is no overlap in data analysis or reported results between these 2 studies.
Both UKBB and NAKO acquire whole-body MRI data on a subset of participants using clinical MR scanners (UKBB: 1.5 T Siemens Magnetom Avanto; NAKO: 3 T Siemens Magnetom Skyra; Siemens Healthineers, Erlangen, Germany). In this study, whole-body T1-weighted images obtained from dual-echo gradient echo imaging, which is available in both UKBB and NAKO, were used. This includes 4 tissue contrasts per participant and image volume (fat, water, in-phase, and opposed-phase). Although these image contrasts are comparable between the 2 studies, other acquisition parameters vary markedly. Notably, voxel size is larger in UKBB (2.23 × 2.23 × 3 mm³ to 2.23 × 2.23 × 4.5 mm³) compared with NAKO (1.2 × 1.2 × 3 mm³), which has a direct impact on spatial resolution, image signal, and image noise. 2,9

Extraction of Image-Derived Features

This study focuses on the phenotypic characterization of abdominal organs (liver, spleen, left and right kidneys, and pancreas). These target organs were automatically segmented on MRI scans of 10,000 data samples per study using a pretrained and publicly available deep learning model based on a 3D full resolution convolutional architecture (nnUNet 9,18). Resulting organ segmentation masks were visually inspected for the purpose of quality control, and data samples with severe MR image artifacts or substantial automated segmentation errors were excluded. This resulted in a total of 17,996 data sets (8791 from UKBB and 9205 from NAKO) that were used for further analysis in this study. This entire process of organ segmentation and quality control is described in detail in previous work 17 and was the technical basis for this work.
In a subsequent postprocessing step, the segmentation masks of the kidneys were split into a parenchymal kidney mask and a kidney hilum adipose tissue (AT) mask by applying a threshold of 0.5 to the relative signal of the fat image (=fat/[fat + water]). Thus, 7 segmentation masks were obtained per data set (5 organs + right and left kidney hilum AT). The corresponding organ and tissue volumes were calculated from these segmentation masks by multiplying the respective voxel count with the voxel volume. In addition to volume features, proton density fat fractions (PDFFs) of liver and pancreas were estimated. To this end, mean fat-image and water-image voxel signal intensities were extracted from liver and the pancreas segmentation masks, and relative fat signal intensities (=fat/[fat + water]) were computed as a measure for the relative organ fat content. 19 Thus, 9 image-derived features were extracted in total (organ volumes, kidney hilum AT volumes, and PDFFs of liver and pancreas).
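The feature definitions above can be made concrete with a minimal sketch; this is an illustration, not the study's actual pipeline, and the array names, voxel size argument, and helper functions are assumptions.

```python
import numpy as np

def organ_volume_ml(mask: np.ndarray, voxel_size_mm) -> float:
    """Organ volume in milliliters: voxel count times voxel volume (mm^3 -> ml)."""
    voxel_volume_mm3 = float(np.prod(voxel_size_mm))
    return float(mask.sum()) * voxel_volume_mm3 / 1000.0

def relative_fat_fraction(mask, fat_img, water_img, eps=1e-6) -> float:
    """PDFF proxy = fat / (fat + water), from mean signal intensities inside the mask."""
    fat = fat_img[mask > 0].mean()
    water = water_img[mask > 0].mean()
    return float(fat / (fat + water + eps))

def split_kidney_mask(kidney_mask, fat_img, water_img, eps=1e-6):
    """Split a kidney mask into parenchyma and hilum adipose tissue by thresholding
    the voxel-wise relative fat signal at 0.5, as described in the text above."""
    rel_fat = fat_img / (fat_img + water_img + eps)
    hilum_at = (kidney_mask > 0) & (rel_fat >= 0.5)
    parenchyma = (kidney_mask > 0) & (rel_fat < 0.5)
    return parenchyma, hilum_at
```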
Analysis of the Data Generating Process
To acquire a comprehensive understanding of the data-generating process-a prerequisite for subsequent data harmonization-we combined prior knowledge with methods of causal discovery. Specifically, we used the knowledge that age was causally dependent on the data source (UKBB vs NAKO) due to different inclusion criteria among these studies. Based on common medical knowledge, we assumed that age and sex have a direct effect on height and weight, and that height has a direct effect on weight. 20 Finally, based on scientific literature, it is well established that age impacts at least a subset of the observed image features; for example, organ sizes of individuals decrease with age. 21-23 Beyond these causal relations established by prior knowledge, we aimed to investigate further potential causal relations among image-derived features, observed demographic features, and the data source. To this end, we used conditional independence testing as a method of causal discovery, combined with knowledge about the direction of potential causal relations. Specifically, we assumed that observed image features can only take the role of children (effects) in causal parent-child relations, whereas the data source can only take the role of a parent (cause).
To identify the causal graph, we performed nonparametric nonlinear conditional independence testing by Invariant Environment Prediction previously described by Heinze-Deml et al. 24 Concretely, we implemented Invariant Environment Prediction using random forest classifiers/regressors (depending on the type of target variable) that were trained with 100 trees and 5-fold cross-validation. The predictive accuracies on the respective validation sets were statistically compared using nonparametric Wilcoxon testing with a significance value of 0.01 with Holm-Bonferroni correction as previously suggested for Invariant Environment Prediction. 24 The null hypothesis of statistical independence was rejected below this threshold.
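A small sketch in the spirit of this procedure (not the authors' exact implementation) is shown below: to probe whether a target is conditionally independent of a candidate variable V given a conditioning set Z, the fold-wise cross-validated accuracies of a random forest trained with and without V are compared using a Wilcoxon test. Function and variable names, and the use of a classifier rather than a regressor, are assumptions.

```python
import numpy as np
from scipy.stats import wilcoxon
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import KFold, cross_val_score

def ci_test_via_prediction(Z, V, target, n_trees=100, n_folds=5, alpha=0.01, seed=0):
    """Test whether `target` is conditionally independent of V given Z.

    For a continuous target, a RandomForestRegressor with an appropriate score would
    be used instead; more folds or repeats give the Wilcoxon test more power.
    """
    cv = KFold(n_splits=n_folds, shuffle=True, random_state=seed)
    model = RandomForestClassifier(n_estimators=n_trees, random_state=seed)
    scores_without = cross_val_score(model, Z, target, cv=cv)               # predict from Z only
    scores_with = cross_val_score(model, np.column_stack([Z, V]), target, cv=cv)
    _, p_value = wilcoxon(scores_with, scores_without)                      # paired fold-wise test
    return p_value, p_value < alpha                                         # True: reject independence
```

In the study, such tests were applied at a significance level of 0.01 with Holm-Bonferroni correction across the set of tested relations.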
Data Harmonization
Before cross-study analysis, we aimed to reduce undesired bias caused by differences in imaging protocols while preserving informative variation due to, for example, age-dependent biological effects. To this end, we used the ComBat technique initially described by Johnson et al. 11 In summary, ComBat achieves batch effect correction by fitting a model to the observed data predicting the features that are to be corrected from the data source (in this case UKBB vs NAKO) and from observed covariates. Subsequently, the contribution of the data source is eliminated obtaining corrected features.
Formally, the value Y_ijf of a feature f of a participant j at site i is modeled as

Y_ijf = α_f + k_j b_f + γ_if + δ_if ε_ijf,

with α_f being the feature mean, γ_if the site-specific deviation from the mean, b_f and k_j regression coefficients and input variables of which the (linear) effect should be preserved, and δ_if a site- and feature-dependent scaling factor for the residue ε_ijf accounting for scaling effects. Harmonized feature values are then computed as

Y_ijf^harmonized = (Y_ijf - α_f - k_j b_f - γ_if) / δ_if + α_f + k_j b_f,

with the model parameters replaced by their estimates, preserving the influence of the input variables k_j. As suggested in previous studies, 7,25 we used a quadratic age term to also account for nonlinear age-dependent feature variation. We applied ComBat for harmonization of image features using the data source (UKBB vs NAKO) as the batch variable (of which the effect should be corrected) and, based on the previous analysis of the data generating process, using age, sex, height, and weight as covariates (of which the effects should be preserved). For ComBat harmonization, we chose UKBB as the reference data set in this study (ie, γ_if = 0 and δ_if = 1 for all image features from UKBB).
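This location/scale adjustment can be illustrated with a minimal numpy sketch; it omits the empirical Bayes shrinkage of the published ComBat method and is not the neuroCombat code used in the study, and all names are illustrative. The covariate matrix X carries the effects to be preserved (sex, height, weight, age, and the quadratic age term).

```python
import numpy as np

def combat_like_harmonize(y, X, batch, reference_batch):
    """Harmonize one feature y across batches while preserving covariate effects X.

    y: 1-D feature values; X: 2-D covariate matrix; batch: 1-D array of source labels.
    """
    batches = list(np.unique(batch))
    # Fit y = (batch-specific mean) + X @ beta by least squares, one dummy per batch.
    dummies = np.column_stack([(batch == b).astype(float) for b in batches])
    design = np.column_stack([dummies, X])
    coef, *_ = np.linalg.lstsq(design, y, rcond=None)
    gamma = dict(zip(batches, coef[: len(batches)]))          # per-batch location (alpha + gamma)
    beta = coef[len(batches):]
    resid = y - design @ coef
    scale = {b: resid[batch == b].std(ddof=1) for b in batches}  # per-batch residual scale
    y_harm = np.empty_like(y, dtype=float)
    for b in batches:
        idx = batch == b
        z = (y[idx] - gamma[b] - X[idx] @ beta) / scale[b]        # standardize within batch
        y_harm[idx] = z * scale[reference_batch] + gamma[reference_batch] + X[idx] @ beta
    return y_harm

# Illustrative call (variable names are assumptions):
# covars = np.column_stack([sex_code, height, weight, age, age ** 2])
# liver_vol_harm = combat_like_harmonize(liver_volume, covars, source, reference_batch="UKBB")
```

With this construction, features from the reference data set are left unchanged, matching the choice of UKBB as the reference described above.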
Cross-Study Analyses
Finally, we merged harmonized data from UKBB and NAKO for subsequent large-scale cross-study analyses. Specifically, we investigated age-dependent changes in extracted imaging features and performed multilinear quantile regression (with an additional quadratic age term accounting for nonlinear effects of age) describing the impact of available demographic parameters on image-derived abdominal phenotypes.
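A hedged sketch of this regression step using scikit-learn's QuantileRegressor is shown below; the data frame column names and the absence of regularization (alpha = 0) are assumptions rather than the study's exact settings.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import QuantileRegressor

def fit_quantile_models(df: pd.DataFrame, feature: str, quantiles=(0.25, 0.5, 0.75)):
    """Fit per-sex quantile models of a harmonized feature on age, age^2, height, and weight."""
    models = {}
    for sex, sub in df.groupby("sex"):
        X = np.column_stack([sub["age"], sub["age"] ** 2, sub["height"], sub["weight"]])
        for q in quantiles:
            reg = QuantileRegressor(quantile=q, alpha=0.0, solver="highs")  # "highs" needs a recent scipy
            models[(sex, q)] = reg.fit(X, sub[feature])
    return models
```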
Software
All analyses were performed in Python 3 using the packages Scikit-learn (for random forest implementation, quantile regression, and statistical testing) and neuroCombat (ComBat implementation, https://github.com/Jfortin1/neuroCombat). Graphs were created using the Seaborn package.
Demographic Data
Image data and related demographic information from a total of 17,996 participants (8791 from UKBB and 9205 from NAKO) were included. Notably, due to different prospective inclusion criteria, participants of UKBB were on average significantly older than NAKO participants with peaks between ages 60 and 70 years in UKBB and around the age of 50 years in NAKO (Table 1, Fig. 1). Participant sex was largely balanced in both studies-a result of a balanced participant recruiting process. We observed similarly shaped empirical joint densities of body height and weight in participants from UKBB and NAKO stratified for sex (Fig. 1). Across data sets, a slight age-dependent decrease in height was observable resulting in slightly lower average height of UKBB participants (Fig. 1).
Image-Derived Features
Overall, the observed marginal densities of image-derived features showed varying degrees of deviation between UKBB and NAKO (Fig. 2A). Organ volumes of liver, spleen, and the kidney showed a tendency toward higher values in NAKO, whereas measured volumes of kidney hilar AT were slightly higher in UKBB.

FIGURE 3. Causal view on the data generating process. ds indicates data source (UKBB vs NAKO); a, age; s, sex; h, height; w, weight; f, image features; p, imaging protocol; c, unknown confounder. Solid lines represent established causal relations; dashed lines represent possible causal relations. Solid circles represent observed variables. Dashed circles represent unobserved variables. Note that ds and p are interchangeable in this case as each study has exactly one image protocol, which is different from the other study. A, Causal graph of the data generating process based solely on prior knowledge. B, Causal graph based on prior knowledge and with additional results from causal discovery (conditional independence testing). We were able to establish a direct effect of the data source (the imaging protocol) on image features and were able to exclude indirect effects mediated by height or weight through an unknown confounder. However, the existence of an additional, unobserved confounder, beyond the different imaging protocols, cannot be excluded in principle.
Analysis of the Data Generating Process
To further understand these observed feature distribution shifts, we analyzed the data generating process using methods of causal discovery. We were able to use prior knowledge about the causal relation among subsets of observed variables to formulate a partial causal model of the data generating process as a starting point (Fig. 3A).
Further, using nonparametric nonlinear conditional independence testing, 24 we were able to uncover direct causal effects of sex (P < 0.0001), height (P < 0.0001), and weight (P < 0.0001) on observed image features and, importantly, of the image source itself (UKBB vs NAKO, P < 0.0001) on image features. In contrast, no causal effect of the data source could be observed on weight (P = 0.95) or height (P = 0.99) beyond the effect mediated by age (Fig. 3B). These results confirm a direct effect (bias) of the data source (NAKO vs UKBB) on observed image features.
Data Harmonization
Image feature harmonization across studies resulted in a better alignment of empirical marginal feature densities between UKBB and NAKO in a subset of features, particularly for pancreas volume and liver PDFF (Fig. 2B). Interestingly, the above-described distribution shifts between unharmonized features from UKBB and NAKO (Fig. 2A) were even slightly increased further by harmonization in a subset of image features, most pronounced for pancreas PDFF and right kidney AT volume (Fig. 2B). Clearly, this was a result of preserving and enhancing age-related effects through feature harmonization. As shown for liver and pancreas PDFF in Figure 4, feature harmonization resulted in a markedly improved alignment of age-dependent empirical feature densities between UKBB and NAKO and thus enhanced conspicuity of age-related changes in liver and pancreas PDFF.
In a supplemental analysis (Supplemental Material 1, http://links. lww.com/RLI/A787), we assessed the success of data harmonization by predicting the data source (UKBB vs NAKO) based on image-derived features. The underlying rationale is that, after optimal data harmonization, identification of the data source should not be possible better than by random choice. We found that before data harmonization identification of the data source based on image features was possible to a high degree, whereas after data harmonization this classification accuracy was markedly decreased, pointing to successful harmonization of image-derived features (Supplemental Material 1, http://links.lww.com/RLI/A787).
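A minimal sketch of such a check, assuming a random forest classifier with 5-fold cross-validation (the supplemental analysis may use different settings), is:

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def source_predictability(features, source_labels, seed=0):
    """Mean cross-validated accuracy for predicting the data source from image features."""
    clf = RandomForestClassifier(n_estimators=100, random_state=seed)
    return cross_val_score(clf, features, source_labels, cv=5).mean()

# acc_raw = source_predictability(raw_features, source)               # expected: well above chance
# acc_harmonized = source_predictability(harmonized_features, source) # expected: close to 0.5
```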
Cross-Study Analyses
Using merged harmonized data from UKBB and NAKO, we assessed age-related changes of image-derived features over a wider age range (20-80 years) than would have been possible for UKBB (50-80 years) or NAKO (20-70 years) alone.
Overall, we observed a marked, nonlinear decrease in organ volumes with age with the steepest volume decline between ages 40 and 80 years. In contrast, volumes of left and right kidney AT compartments increased substantially with age with the steepest increase between ages 40 and 80 years (Figs. 5, 6).
Liver PDFF and pancreas PDFF both increased nonlinearly with age. This age-dependent increase in organ fat content was more pronounced for the pancreas. Regarding hepatic fat content, a slight age-dependent increase was observed, whereas a subpopulation of individuals with markedly increased hepatic fat content appeared after the age of approximately 40 years (Fig. 5).
Finally, joint analysis of harmonized data from UKBB and NAKO allowed us to generate quantitative models of interactions between epidemiological variables and image-derived features. Using quantile regression, we derived median feature values as well as 25% and 75% quantile feature values as a function of age (including a quadratic age term), weight, and height separately for male and female subpopulations. Interestingly, only the quadratic age term and body weight had nonzero coefficients in the final models (Table 2, Supplemental Material 2, http://links.lww.com/RLI/A788). These models provide a unique characterization of the expected phenotypic range of abdominal organ volumes and AT distributions in the investigated populations across a large age range. Beyond age-related changes described previously, these quantitative models revealed a positive effect of body weight on organ volumes and liver and pancreas PDFF of varying degree. Representative examples of abdominal organ phenotypes are shown in Figure 6.
DISCUSSION
In this study, we demonstrated joint, cross-study analysis of imaging data from UKBB and NAKO. We investigated the data generating process and corrected for undesired bias related to the data source. After data harmonization, we performed cross-study analyses characterizing abdominal organ phenotypes in the normal population across a wide age range. To understand data biases, we investigated the data generating process using a combination of prior knowledge and methods of causal discovery. We found that the data source (UKBB vs NAKO) had a direct effect on image-derived features beyond the effects of age, sex, height, and weight. This source-related bias is most likely the result of differences in the image acquisition process between the studies resulting in acquisition shift. 16 Beyond the effects of different imaging protocols, however, it cannot be excluded that unobserved confounders (eg, differences in ethnicity, lifestyle, or nutrition between UKBB and NAKO participants) mediate additional effects of the data source on image features. Overall, we expect these unobserved effects to be far less significant compared with the direct effects of different imaging protocols on image features.
Cross-study analysis of image features revealed how joint analysis of data from different sources enables a more comprehensive understanding of phenotypic variation. We were able to characterize age-related changes of abdominal organ phenotypes in a way that reflects the majority of the adult population in the United Kingdom and Germany. What has been previously reported for small cohorts with a focus on single organs was possible in this study on a large and representative data set thanks to a combination of uniquely large-scale data, automated feature extraction using deep learning, and cross-study analysis of harmonized data, grounded in causal analysis of the data generating process. We were thus able to provide quantitative models for abdominal organ volumes as well as abdominal AT distribution (liver PDFF, pancreas PDFF, kidney hilum AT volume). This information can potentially be used for defining normative and reference values also in clinical settings with diagnostic utility. To this end, however, the analysis of all data to be acquired in UKBB and NAKO as well as their joint interpretation with outcome data will be required.
The observed ranges of organ volumes in this study are in accordance with existing literature reports. [26][27][28][29][30][31] Similarly, our findings on AT distribution are comparable to previous reports on liver PDFF, 32 pancreas PDFF, 33 and kidney hilum AT. 34 In contrast to these previous studies, the size of the underlying data combined with the wide age range of participants in our study provide a much more comprehensive and general description of parameter distribution.
This study has limitations. Most importantly, feature extraction can be further improved for a subset of features by using dedicated image sequences available in UKBB and NAKO. For example, the analysis of dedicated multiecho sequences for estimation of liver and pancreas PDFF may increase accuracy for these parameters. Furthermore, the addition of further nonimaging data will allow for a more detailed understanding of the data generating process by considering information about, for example, lifestyle, patient history, or genetic predispositions. We will have to leave these analyses to future studies that can be performed once data collection in UKBB and NAKO are completed.
ComBat normalization (and comparable methods), by design, is performed relative to a reference, which can be one of the included data sets or their weighted combination. Without external calibration, the choice of this reference is not well-defined. In this study, we chose UKBB as the reference data set. The rationale for this choice was the assumption that signal intensity measurements in particular are more robust and less prone to artifacts on a 1.5 T scanner with larger voxel size due to higher field homogeneity and less noise or ghosting artifacts. To resolve the question of the choice of reference more definitely, additional external calibration measurements (eg, multiecho acquisitions available in UKBB for precise PDFF estimation) will be required in future studies.
In this study, we provided a blueprint of how cross-study analyses can be performed in the context of epidemiological cohort imaging studies and demonstrated the remarkable potential of such analyses.
In conclusion, cross-study analysis of image-derived features from UKBB and NAKO is feasible and can provide unique, population-wide insights into imaging phenotypes and their relation to epidemiological data. Data from UKBB and NAKO harmonized as proposed in this work can be helpful for future joint data analyses across cohorts linking genetic, environmental, and behavioral risk factors to MRI-derived phenotypes and provide reference values for clinical diagnostics.
|
BUCKLING ANALYSIS OF LAMINATED COMPOSITE PLATES UNDER THE EFFECT OF UNIAXIAL AND BIAXIAL LOADS
This paper investigates the buckling analysis of simply supported, symmetrically laminated thin and thick composite plates. Using Hamilton's principle, the governing equations of motion for thin and thick laminated rectangular plates subjected to in-plane loads are derived. The loading conditions of the rectangular plate are uniaxial and biaxial compression. Considering the Navier solution technique, closed form solutions are attained and buckling loads are found by solving the eigenvalue problems. In this study, the effects of edge ratio and anisotropy on the buckling of the rectangular plate were investigated. Separate computer programs were written in Mathematica (MATHEMATICA 2017) for the solution of the buckling analysis of laminated composite plates. Results of the numerical studies for the buckling of laminated composite plates (LCP) are demonstrated and benchmarked against former studies in the literature and ANSYS finite element results.
INTRODUCTION
Recently, owing to their many advantageous properties, advanced composite materials such as laminated plates have found application in engineering projects. Extensive research has been performed on LCP to clarify the advantages of using these types of materials. One of the focal topics in this research area is the buckling analysis of composite plates. Reissner theory (1945) is one of the theories that include the shear deformation effect, and many researchers have studied the buckling analysis of LCP using Reissner theory. Noor (1975) examined the stability and vibration analysis of composite plates. Qatu used an energy function to develop the governing equations of LCP. Phan and Reddy (1985) analyzed laminated composite plates using a higher-order shear deformation theory. Reddy and Khdeir (1989) investigated buckling and vibration analysis of LCP. Several studies on the characteristics of plates were performed by Qatu (1991-2004) using different plate theories. Dogan et al. (2010) analyzed the effects of anisotropy and curvature on the vibration characteristics of laminated shallow shells using shear deformation theory. Dogan (2012) investigated the effect of dimension on the mode-shapes of composite shells. Akavci (2007) presented buckling and free vibration analysis of symmetric and antisymmetric laminated composite plates on an elastic foundation. Akavci et al. (2007) examined the buckling and free vibration behavior of LCP on elastic foundation by using first order shear deformation theory (FSDT). Thermal buckling of functionally graded plates was investigated by Akavci (2014) using a hyperbolic shear deformation theory. Setodeh and Karami (2004) studied the buckling analysis of laminated composite plates on an elastic foundation. Sophy (2013) studied buckling and free vibration of exponentially graded sandwich plates resting on elastic foundations under various boundary conditions. Dogan (2019) investigated buckling analysis of symmetric laminated composite thick plates. Sayyad and Ghuga (2014) presented a study on buckling and free vibration analysis of orthotropic plates by using exponential shear deformation theory.
In this research, the buckling of symmetric LCP is investigated for various numbers of layers, plate edge ratios, and anisotropy ratios. This study may serve as a pioneering reference for future analytical and experimental work on laminated composite plates.
EQUATIONS
A lamina is produced from isotropic homogeneous fibers and matrix materials (Fig. 1). Any point on a fiber, on the matrix, or on the matrix-fiber interface has a crucial effect on the stiffness of the lamina. Because the properties of the lamina vary strongly from point to point, the macro-mechanical properties of the lamina are determined based on a statistical approach. According to FSDT, the transverse normal does not remain perpendicular to the mid-surface after deformation. It is assumed that the deformation of the plate is completely determined by the displacement of its middle surface. Using the equation given below (Eq. 1), the stress-strain relationship of the nth lamina can be defined in lamina coordinates (Qatu 2004).
The displacement field of the plate theory can be written as

u(x, y, z) = u0(x, y) + z φx(x, y),
v(x, y, z) = v0(x, y) + z φy(x, y),
w(x, y, z) = w0(x, y),

where u, v, and w are the displacements in the x, y, and z directions, φx and φy are the rotations of the transverse normal, and u0, v0, and w0 are the mid-plane displacements.
The equations of motion for plate structures can be derived by Hamilton's principle (Eq. 3), where T is the kinetic energy of the structure, qx, qy, qz, mx, and my are the external forces and moments per unit length, respectively, and U is the strain energy. Solving Eq. (3) gives the set of equations called the equations of motion for plate structures, which can be written in simplified form as Eq. (7), where the parameter Ks is the shear correction factor; here, Ks is taken as 5/6. The Navier-type solution can be applied to thick and thin plates. This type of solution assumes that the displacement field of the plate can be expressed in terms of sine and cosine trigonometric functions. A plate with shear diaphragm boundaries on all edges is assumed. For simply supported thick plates, the boundary conditions can be arranged accordingly, leading to an eigenvalue problem in which [Kmn] is the stiffness matrix and [N] is the buckling load.
NUMERICAL SOLUTIONS AND DISCUSSIONS
In the current research, buckling analyses of symmetric LCP are investigated using the Navier solution procedure. Separate computer programs were prepared in Mathematica for the solution of the buckling analysis of LCP. The results were compared with a semi-analytical method, the ANSYS finite element software, and previous studies in the literature. The effects of the E1/E2 and a/b ratios are also investigated.
In the numerical calculations, the material and geometrical properties are defined as: a = 1 m; a/b = 1, 2; a/h = 10; ρ = 2000 kg/m³; E1 = 40×10³ MPa (E1/E2 = 3, 10, 20, 30, 40); G12/E2 = G13/E2 = 0.6; G23/E2 = 0.5; υ = 0.25. A non-dimensional buckling load parameter is studied in the analysis. It can be seen from Table 1 that the non-dimensional buckling load factors increase when the ratio E1/E2 changes from 3 to 40 (Fig. 3). The non-dimensional buckling load factors decrease when the ratio a/b changes from 1 to 2 (Table 2, Figs. 4-5). Also, the non-dimensional buckling load factors obtained in the present study seem to be compatible with those of other studies. The buckling analysis results showed that when the number of layers increases, the non-dimensional buckling load factors obtained in the present study increase as well (Figs. 6-7).

Fig. 6. Effect of edge ratio on the uniaxial and biaxial buckling load factors for various lamination sequences.
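As an independent check of the order of magnitude of such results, the following hedged sketch in Python (the paper's own programs were written in Mathematica and use FSDT) evaluates the Navier-type buckling load of a simply supported, symmetric cross-ply laminate using classical laminated plate theory, so shear deformation is neglected and values will differ somewhat from FSDT results at a/h = 10. Equal ply thicknesses and the normalization N̄ = Ncr·a²/(E2·h³), a common convention, are assumptions.

```python
import numpy as np

E1, E2 = 40.0, 1.0           # only ratios matter for the normalized buckling load
G12, nu12 = 0.6 * E2, 0.25
a, h = 1.0, 0.1              # a/h = 10
b = a                        # a/b = 1
layup = [0, 90, 90, 0]       # symmetric cross-ply, equal ply thicknesses assumed

nu21 = nu12 * E2 / E1
den0 = 1.0 - nu12 * nu21
Q = np.array([[E1 / den0, nu12 * E2 / den0, 0.0],
              [nu12 * E2 / den0, E2 / den0, 0.0],
              [0.0, 0.0, G12]])

def qbar(theta):
    """Transformed reduced stiffness; for 0/90 plies this simply swaps Q11 and Q22."""
    if theta % 180 == 0:
        return Q
    return np.array([[Q[1, 1], Q[0, 1], 0.0], [Q[0, 1], Q[0, 0], 0.0], [0.0, 0.0, Q[2, 2]]])

z = np.linspace(-h / 2, h / 2, len(layup) + 1)
D = sum(qbar(t) * (z[k + 1] ** 3 - z[k] ** 3) / 3.0 for k, t in enumerate(layup))

def critical_load(k_biaxial=0.0, modes=5):
    """Smallest critical Nx over half-wave numbers (m, n), with Ny = k_biaxial * Nx."""
    best = np.inf
    for m in range(1, modes + 1):
        for n in range(1, modes + 1):
            num = np.pi ** 2 * (D[0, 0] * (m / a) ** 4
                                + 2.0 * (D[0, 1] + 2.0 * D[2, 2]) * (m / a) ** 2 * (n / b) ** 2
                                + D[1, 1] * (n / b) ** 4)
            den = (m / a) ** 2 + k_biaxial * (n / b) ** 2
            best = min(best, num / den)
    return best

N_bar_uniaxial = critical_load(0.0) * a ** 2 / (E2 * h ** 3)   # about 36 for E1/E2 = 40 (thin-plate value)
N_bar_biaxial = critical_load(1.0) * a ** 2 / (E2 * h ** 3)    # roughly half the uniaxial value
print(N_bar_uniaxial, N_bar_biaxial)
```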
CONCLUSION
In this study, the buckling of symmetrically laminated composite plates (LCP) is theoretically investigated. By applying Hamilton's principle, the governing equation for thick LCP is obtained. The solutions are gathered by using the Navier solution method. The effects of the a/b and E1/E2 ratios on buckling loads are examined. The most important observations and results are summarized as follows:
• The non-dimensional buckling load factors obtained in the present study seem to be compatible with those of other studies.
• The a/b and E1/E2 ratios play a crucial role in the buckling loads.
• The non-dimensional buckling load factors generally decrease when the ratio a/b changes from 1 to 2.
• The non-dimensional buckling load factors increase when the ratio E1/E2 changes from 3 to 40.
• As the number of layers increases, the non-dimensional buckling load factors obtained in the present study increase as well.
Cina Benteng: the Latest Generations and Acculturation
The aim of this paper was to investigate the acculturation process encountered by the two latest generations of Cina Benteng. A Skype interview was conducted with two young Cina Benteng descents. The analysis was also supported by insightful remark from the parents of the two interviewees. This study discovers that the two generations seem to respond to the acculturation process in different ways. However, although some traditions are no longer relevant to the later generation, their identity as a Chinese descent cannot be easily removed.
INTRODUCTION
The acculturation process varies through different generations. Earlier migrants might have fond and deep nostalgia for their origins. Over time, the nostalgia might fade. The descendants, those who are born in the new place, might find it irrelevant to talk about the collective memory of their past origin. Some things are remembered; some are left to be forgotten.
This paper intends to investigate to what extent the two latest generations of Cina Benteng (translated: 'Chinese of the Fort', one of the Chinese Indonesian ethnic groups) encounter the acculturation process. Some research has been conducted to examine the early generations of Cina Benteng, yet there has not been much study revealing the latest generations. The data were collected from a Skype interview with two young Cina Benteng descents born and brought up in Tangerang, the main base of the Cina Benteng community.
The history of ethnic Cina Benteng and a brief portrayal of the earlier generations will be provided, followed by the description of the informants' experience and their parents' life story. The experience and the life story will be linked to Berry (1990). Boski (2008) will then sharpen the analysis. In addition, the discussion in this paper will be supported by personal insight and points of view as a Cina Benteng descent. Thus, the writer realises that the analysis might not represent the whole idea of Cina Benteng ethnic group.
In the end, a brief summary and some in-depth questions of the future of ethnic Cina Benteng will conclude the study.
Theories in psychology of acculturation
The way individuals deal with acculturation might differ than how the groups do in a bigger scale. There are two levels of this process that make a division between collective and individual acculturation (Berry, 1990). The population level, which includes ecological, cultural, institutional and social factors, might stimulate a transformation in social, economic and political structure. The individual level, on the other hand, reflects changes in one's behaviour, values, identity and attitudes. These changes might be influenced by interaction with another culture or individuals' participation in collective transition. The changes that the individuals experience are then described as psychological acculturation. Berry (1990) also noted that acculturation process includes several key elements. The contact or interaction between two cultures results in changes both in aspects of culture and psychology of the people involved. These changes tend to be passed down to the next generation. Furthermore, dynamic activities are involved before and after the contact. Individuals characterise certain aspects of life such as their preferred education, media, religion, education, politics, daily practices and social interactions. It then defines a relatively stable way of living in the acculturative place.
Furthermore, Berry (1990) defined four varieties of acculturation based on the orientation to maintain cultural identity and characteristics and the importance of sustaining relationships with other groups. According to Berry (1990), assimilation occurs when the acculturating group or person does not wish to uphold his/her identity. In contrast, when the original culture is cultivated and interaction with other groups is avoided, the idea of separation is defined. When maintaining the original culture and interacting with other groups are both seen as important, integration takes place. Last, when there is not much interest in maintaining the original culture or in interaction with other groups, marginalization occurs. Responding to Berry, Boski (2008) argued that integration is the most desirable option among acculturating individuals. He defined five levels of integration. In level 1 (acculturation attitudes), individuals feel comfortable living in two cultural worlds. They need to be fluent in the languages of both cultures and to maintain contact with both groups. Level 2 (perception and evaluation) defines a merging point of the two cultures, in which 'similarity' or a 'third value' is viewed. In level 3 (functional specialisation), individuals (usually a family) are able to develop separation at home, while in public domains they manage to apply assimilation. When the individuals are able to become a bicultural person, level 4 is defined. Finally, in level 5, noted down from Bennett (1993) and Bennett and Bennett (2004), integration is equalized with marginalisation, where individuals find themselves in a pluralistic world and do not belong to any culture. This could relate to the idea of cosmopolitanism, in which everyone is a citizen of the world.
Ethnic Chinese in Indonesia from time to time
Ethnic Chinese have existed in Indonesia for a very long time. During the Dutch colonial period, as noted in Coppel (1999; 2001), ethnic Chinese were classified as "foreign orientals". Over time, there has been a strong distinction between Chinese Indonesians and indigenous or local Indonesians. Budiman (2005) noted that during the era of Soekarno (the first president of the Republic of Indonesia, 1945-1965), ethnic Chinese in Indonesia were viewed from two different positions. On the one hand, the Consultative Body for Indonesian Citizenship proposed that the Chinese must be allowed to maintain their culture and be seen as a part of Indonesia's rich ethnic diversity. This position (also known as the left) was led by Soekarno and supported by the Communist Party. On the other hand, the Institute for Development of National Unity seemed to lean more toward assimilation, emphasising that ethnic Chinese had to give up their culture and fully adjust to Indonesian culture, values and tradition. This view was shared by the Indonesian military and Muslim groups.
After the 1965 conflict against the Communist Party led by the Indonesian military, the new regime (known as the New Order) led by Soeharto forced ethnic Chinese in Indonesia to completely assimilate. The term 'Cina', which referred to Chinese Indonesians and sounded more insulting, became official to replace 'Tiongkok' or 'Tionghoa'. Chinese Indonesians were obliged to have an Indonesian name (ibid). Chinese culture, including the use of Chinese characters in public and Chinese festivals, was banned. The separation between 'Cina' and 'pribumi' (a common term for local Indonesians) was obvious. Many Chinese Indonesians have been known to be wealthier and more successful than locals. A lot of them own a business. This has created a big social gap. Things had not really changed until 1998. When Indonesia was facing a monetary crisis in 1998, a bigger conflict between 'Cina' and 'pribumi' burst out in some big cities. Shops and houses owned by Chinese Indonesians were robbed and burnt. A great number of Chinese Indonesians (believed to be over a thousand) were killed; the women were raped. Chinese Indonesians were forced to hide their identity.
The clash brought both positive and negative impacts. Indonesia's fourth president, the late KH Abdurrahman Wahid (well known as Gus Dur), dismissed the laws that discriminated against Chinese Indonesians. In 2002, the next president, Megawati Sukarnoputri, declared Chinese New Year a national holiday. Chinese cultural symbols and traditions, such as 'liong' and 'barongsai' (dragon and lion dance), were shown in many public events. However, although Chinese Indonesians are becoming more accepted nowadays, many of them are still attempting to find a so-called 'place' in Indonesian society.
History of Cina Benteng
Ethnic Chinese are spread out in some cities in Indonesia. The latest population census in 2010 notes that there are approximately 2.8 million Chinese Indonesians (Franciska, 2014). The most well-known groups are Cina Medan (based in Medan, North Sumatera), Cina Bangka (in Bangka), Cina Jawa (in Semarang and Surabaya) and Cina Singkawang (West Kalimantan). There are also some smaller Chinese groups such as Cina Benteng, based in Tangerang (a greater area of Jakarta), Banten province.
Unlike other Chinese Indonesian ethnic groups, which are usually associated with great wealth and fortune, Cina Benteng tend to be looked down upon in terms of their social status. News on TV or newspaper articles often discuss their low economic status and underprivileged living conditions. Their skin is not as light as that of other Chinese Indonesians; it is rather darker, as that of the 'pribumi' is. Yet, their eyes are slanted, a typical Chinese characteristic. Arif (2014) noted that it has been the sixth or seventh generation of Cina Benteng since they first arrived in Indonesia. Due to the Dutch massacre of the Chinese population in Batavia (now called Jakarta) in 1740, Chinese people escaped to Tangerang and Bekasi. They started to live in some outskirt areas of Tangerang such as Kedaung, Kampung Melayu, and Teluk Naga (Arif, 2014). The word benteng (translated: fort) refers to the fortress along the Cisadane River, Tangerang, built by the Dutch during colonialism. The Chinese who moved from Jakarta then lived in the area that used to be the fortress. From this, the term Cina Benteng (Chinese of the Fort) was created. The Chinese then started to assimilate with the local culture. They married local women. Some of them even converted to Islam and refused to eat pork (Sugianta et al., 2012; Arif, 2014).
In the 1900s, Cina Benteng contributed significantly to Dutch colonialism in Tangerang. This created anger among the pribumi, and an ethnic clash between the pribumi and Cina Benteng occurred in 1946 (Arif, 2014). The houses of Cina Benteng people were looted. Those who survived there or came back later no longer owned property. Some of them lived in brick or bamboo houses along the Cisadane River. This then defined their social status, the main thing that distinguishes them from the other ethnic Chinese in Indonesia. Ethnic Cina Benteng have displayed two different characteristics of their own. On the one hand, they still hold their 'circle of life' traditions such as the wedding and funeral. They still celebrate the big Chinese festive days (Imlek - Chinese New Year, Cap Go Meh - the 15th day of the new year, and Peh Cun - the boat race festival).
They also have a special part in their house dedicated to their ancestors (ibid). On the other hand, ethnic Cina Benteng has successfully assimilated with the pribumi, especially ethnic Sunda (West Java) and Betawi (Jakarta). It can be seen from the modification of their traditions.
In their traditional wedding, the groom wears typical Chinese clothes while the bride wears clothes from the Betawi ethnic tradition. The music played at the wedding, called gambang kromong, is derived from coastal Sundanese music. Many Cina Benteng people do not speak Chinese. They have their own dialect and vocabulary; some of it is a combination of rough Chinese, Indonesian and Sundanese or Betawi (Nurafni, 2012).
Elder generations of Cina Benteng may have firm knowledge of their traditions. Over time, the young generations encounter a different way of acculturation. The data taken from the interviews in this study will investigate how individuals from two generations of Cina Benteng adjust their lives to local settings in Indonesia. The variables mentioned in Berry's (1990) study such as education, media, political participation, religion, language, daily and social practices will be the foundation of the analysis.
METHODS
In order to get a better sense of the acculturation process of the current generations of CinaBenteng, qualitative research was conducted in this study. Lodico, Spaulding, and Voegtle (2006:264) explained that qualitative research is "the study of social phenomena and on giving voice to the feelings and perceptions of the participants under study". Qualitative research is best used in this study to gain information of how the latest generations of Cina Benteng have encountered a different notion of acculturation. Furthermore, the phenomenology (Qualitative Research, n.d.) is used as an approach to gather sufficient data. This study aims to gain access to the life-world and experiences of the latest generations of Cina Benteng.
I did a Skype interview with two young Cina Benteng descents (a 19 year old female and a 22 year old male). I have direct contact with these two informants. The interview was done in both English and Indonesian. I then requested them to interview their parents and record the conversation (in Indonesian). Due to the distance and time difference between me, as the researcher, and the informants, a Skype interview was conducted instead of a face-to-face interview. One of the families lives in the area of the Cina Benteng community in the city of Tangerang, while the other one lives outside the Cina Benteng community. Questions in the interview are based on Berry's (1990) theory of psychological acculturation and the five meanings of integration proposed by Boski (2008). Questions include the participants' activities and hobbies, what they know about Cina Benteng and their ancestors, what they remember from their childhood, the kind of school they have attended, the language spoken at home and some daily practices such as the food served at home. Although the respondents did not mind their real names being published, for the matter of confidentiality the respondents were renamed.
Another dimension of acculturation attitude and the third value
The first informant is Tanti, a 19 year-old girl studying Accounting. She is among the more fortunate Cina Benteng descents of her generation, whose families are able to send them to one of the popular universities in Jakarta. Earlier generations, looking at my own family, tended to underestimate education. After high school, they mostly worked in the family business or, if less fortunate, worked in marketing (as sales persons). In the past, from my personal view and observation as a Cina Benteng descent, starting a new family seemed to be more important than pursuing higher education. Men went to several houses of their relatives to find a woman to marry. Nowadays, Cina Benteng descents can find their wife or husband at their workplace or place of education. However, recently there is a tendency that Cina Benteng descents prefer to marry other Chinese descents. In the Skype interview, Tanti noted that she does not mind making friends with people from other ethnic groups. Yet, she explained that none of her siblings is married to non-Chinese. Referring to Boski's (2008) acculturation attitudes, Tanti's decision to make friends with any ethnic group reflects the notion of integration, in which the acculturating individuals feel the need for interaction with both groups. In contrast, when it comes to a desirable marriage, they go back to the level of separation, where they prefer people from the same ethnic group. There seems to be a desire or need to maintain their cultural identity or characteristics as Chinese descents. This contradicts the history of Cina Benteng. Ethnic Cina Benteng was established by the assimilation of Chinese migrants who married local Indonesians.
Tanti and her family have lived in several places. Until the age of ten she lived in Kapling, one of the important areas for the Cina Benteng community. Now they live in a costly housing area in Tangerang. They practice Buddhism. When talking about religious practices, she notices that some other Cina Benteng people now follow other religions such as Catholicism and Christianity. It should also be noted that both she and her father went to a private Catholic school as children. Here we can see another dimension of acculturating attitude. Students of private/Christian/Catholic schools are mostly Chinese descents, while those of state schools are generally local and Moslem Indonesians. Although assimilation has long been applied in the Cina Benteng community, some modern Cina Benteng families unconsciously or subconsciously seem to encourage their children to spend time with other Chinese. In this case, ethnicity might matter more.
Tanti's father, Hendri (53 yo), highlighted some differences between past and recent Cina Benteng traditions. In today's weddings, the bride and the groom wear western wedding dress. Weddings used to be held in the house of the couple; nowadays they are held in a wedding hall. During the interview, Hendri was able to name each ritual of the wedding, while Tanti failed to name them. It is no longer only about the acculturation process of the local and acculturating group; global culture might intervene. Furthermore, Hendri noted that early generations of Cina Benteng have invented "their own cultures". The third value or merging point has been established. It may show that the Cina Benteng have a desire to acquire two identities, that they belong to both Chinese and local (Indonesian) culture. Chinese language has long disappeared in Cina Benteng families. Instead, Cina Benteng created specific vocabulary which is close to, yet not the same as, regular Indonesian ('ambek' instead of 'marah' / upset, 'beberes' instead of 'beres-beres' / to tidy up, 'aleman' instead of 'manja' / spoiled, etc) (Kamus Bahasa Orang Cina Benteng, 2010). The food served in Cina Benteng families also reflects Boski's (2008) third value/merging point. Kecap Benteng SH (soy bean sauce SH) is one of the main ingredients of Cina Benteng homemade dishes. SH is the initial of the founder, Siong Hin, a Cina Benteng descent.
Besides the language, myths also seem to fade away in the latest generation of Cina Benteng. At some point in the interview, Tanti noted that her mother still reminds her not to cut her nails in the evening or at night. It is believed to bring evil spirits. However, Tanti does not practice such beliefs anymore. In the end, when asked how she feels towards her ethnic group, Tanti comes to the idea of marginalisation. Being with people from other ethnic Chinese groups, she does not identify herself as one. She is completely aware that Cina Benteng is different from other ethnic Chinese. She finds no problem making friends with local Indonesians, although being among other Indonesians would sometimes make her feel 'different'. Home, as she explains, is where she meets other Cina Benteng people in Pasar Lama (a market dominated by Cina Benteng) or her childhood memory where she played with her Cina Benteng neighbour friends.
Childhood memory as the foundation of ethnic identity
For the second informant and his father, the Cisadane River and their childhood have greatly influenced their daily and social practices. Anjas, 22 yo, spends most of his spare time with friends from his childhood at the 'klenteng'. Although he would not mind having other friends (as he does at his school and workplace), he feels more comfortable surrounded by people he has spent time with since they were kids. He and his father, Heri, were born and raised in Kapling, near the Cisadane River. Heri recalled his childhood memory of swimming in the river with friends. It again makes Cina Benteng different from other Chinese groups. Due to the higher social status owned by other ethnic Chinese, their children might not be willing to play in places like a river. Heri also mentioned some traditions which are usually held at the Cisadane River, such as the boat races on the 15th day of the new year. Heri and Anjas still actively participate in such events every year. That contributes to shaping their identity as Cina Benteng. Heri interacts with people of all kinds. With his Indonesian (pribumi) friends he speaks Sundanese, one of the local dialects. Since he and his ancestors do not speak Chinese, Indonesian is used when talking with other Chinese-Indonesians.
Heri has a Chinese name, while Anjas does not. The latest generation of Cina Benteng no longer have Chinese names. Like my father, his siblings and the earlier generations, Heri serves different functions in different settings. At home he and his family call each other by their Chinese names, while with non-Chinese friends outside he is known as Heri. At home he is a Cina Benteng descent; outside he might fully mingle with Indonesians. Yet, those childhood memories at the Cisadane River are strongly attached to his identity.
CONCLUSION
Ethnic Cina Benteng have encountered several levels of the acculturation process. From losing the original language to creating a new one, different generations respond to acculturation in different ways. Although Chinese names are no longer relevant in the latest generation, their identity as Chinese descents cannot easily be removed.
The fact that ethnic Cina Benteng might have assimilated with local Indonesians in the past, together with their low economic status, also raises questions. If other Chinese groups are unwilling to assimilate, yet they have a better life (than ethnic Cina Benteng, which has a higher degree of assimilation) in the acculturating place, is assimilation still important? To what extent should the acculturating individuals merge with the dominant group? If now some Cina Benteng descents prefer to marry other ethnic Chinese (not local Indonesians as their ancestors did), will it bring them back to their origins?
For Heri, my father and other Cina Benteng descents who were born and spent their childhood by the Cisadane River, the memory of swimming and practicing the traditions at the riverside might help keep their identity as Cina Benteng.
Tara Bandu: On the Hybridization of a Sign
Tara bandu is a traditional ceremony in Timor-Leste that enshrines a customary law with official recognition since independence, which generally applies to the spatial scale of the smallest administrative division of the territory (suco) and a timespan of several years, rooting in tradition (lisan) concerning natural resources management and also relations among people. There is evidence related to the concepts of adat (tradition in Indonesia) and pemali (taboo) in Southeast Asia and Austronesia, suggesting that precursors of tara bandu should have existed before the Portuguese arrival in the early XVI century. Yet, there was a subsequent diachronic process of hybridization of static iconic devices and other traditional Timorese practices with the vocalized Portuguese colonial bandos, evolving into a choreographic ritual with several semiotic dimensions: the sacrificial animist performance addressed to the ancestors' spirits and a supernatural environment (lulik), dancing and singing and other artistic traits, including Catholic rites, then focusing on signing written documents endorsing commitments. The main objective of this paper is to propose a semiotic characterization of the hybridization processes leading to current tara bandu ceremonies using Peirce's typology, rooting in the static and iconic device named kero (sensu Forbes) discussed herein. Contemporaneously, tara bandu is a salient event anchoring communities in defining participatory land use plans, including agreements on property boundaries, rules of engagement, and also interdictions and sanctions. Tara bandu is mentioned nowadays as an example and case study of bottom-up strategies for environmental peacebuilding processes.
Unlike the laws of physics, which are free of inconsistencies, every man-made order is packed with internal contradictions. Cultures are constantly trying to reconcile these contradictions, and this process fuels change. (Harari, 2015, p. 182)

INTRODUCTION

This paper focuses on tara bandu, a traditional ceremony present in Timor-Leste, a former Portuguese colony until 1975, now celebrating twenty years of independence after more than two decades of Indonesian occupation. We will be dealing with the hypothesis that the current ritual concept, design and performance embody a process of diachronic hybridization between native ancient practices and the Portuguese colonial bandos. Tara bandu has been described as a form of customary natural resource management whereby communities swear under a sacred oath, often accompanied by animal sacrifice, not to eat particular foods or cut down specific plants or trees (Scambary & Wassel, 2018), and it can also be broadly interpreted as a practice regulating a range of place-based social and environmental relationships (Palmer, 2016). A very recent and wide overview of anthropological studies carried out in Timor-Leste after independence is available, systematized according to different schools of thought and universities (Fidalgo-Castro, 2022).
The customary law tara bandu is recognized in the Lei de Bases do Ambiente (Decree-Law No. 26/2012 of the 4th of July), Article 1 (Definitions) of which states: '[Tara bandu] is a custom that is part of the culture of Timor-Leste that regulates the relationship between humans and the environment.' And, in Article 8, specifically: 'The state recognizes the importance of all types of Tara Bandu as a custom that is part of the culture of Timor-Leste and as a traditional regulatory mechanism for the relationship between humans and the environment around them.' 3
The lulik concept, considered a local term correlated with taboo and referring to what should be regarded as sacred, holy, forbidden and dangerous, is central to building social contracts between the Timorese, demanding that nature must be respected, and applies to sacred places and persons who perform the rituals and also to sacred trees that cannot be cut without asking permission (Guterres, 2014, p. 13). In Timor-Leste, and before that in Portuguese Timor, leadership and authority involved not only the paramount ruler (e.g. dato in Tokodede, liurai in Tetum), renamed rei and régulo in the Portuguese language, but also ritual speakers (lia na'in: the owner/custodian of the words) and the guardians of sacred objects (dato lulik), among others (Kamen, 2015, p. 38).
Tara bandu, as it occurs nowadays, can be interpreted as a hybrid semiotic framework expressed as a choreographic argument (Casquilho & Martins, 2021) which originated from the local Timorese animist and regulatory practices concerning natural resources management associated with spirit ecologies (e. g. Palmer & McWilliam, 2019) and spiritual landscapes (Bovensiepen, 2009), anchored in the lulik concept and respect for the spirits of ancestors, but also embodying the Portuguese colonial bandos and Catholic rites, resulting in an original ritual performance with several dimensions and a specific binding effectiveness among those involved.
The main aims of this paper are: to briefly review the roots of tradition in Timor-Leste (lisan) and neighboring Indonesia (adat), including some relevant notes on the Portuguese presence and influence; to sketch a semiotic characterization of the hybridization processes leading to current tara bandu ceremonies, using Peirce's typology; and to highlight the relevance of tara bandu as a binding procedure for the stakeholders, consecrating a customary law. This paper intends to contribute to the discussion on a future categorization of tara bandu as a case of intangible heritage of the lusophone (Portuguese-speaking) world.
Some Historic and Cultural Notes Concerning Timor-Leste
Yuval Harari (2015, p. 54) mentions that, long predating the Agricultural Revolution, the first permanent settlements in history might have appeared on the coasts of Indonesian islands as early as 45,000 years ago.
Recent archeological findings report two new engraving sites in the Tutuala region of Timor-Leste, comprising mostly humanoid forms carved into speleothem columns in rock-shelters and considered to date from the terminal Pleistocene and early Holocene in southeastern Wallacea (O'Connor et al., 2021). Another study reports the discovery of at least 16 hand stencil motifs in Lene Hara Cave, where evidence of human occupation is estimated to date from ~43,000 cal BP, consistent with the pattern found in neighboring regions of Island Southeast Asia and Australia and recognized as part of Pleistocene painting traditions (Standish et al., 2020).
Also, Laili Cave, in northern Timor-Leste, is said to preserve the oldest traces of human occupation in this insular region - earlier than the other early Pleistocene sites known in Wallacea, with a sequence spanning 11,200 to 44,600 cal. BP. According to Hawkins et al. (2017), the vestiges revealed variability in subsistence strategies over time, which appears to be a response to changing landscapes and the concomitant local resources.
James Fox (2011) points out that the terms for the kin categories used across Timor for the first ascending consanguineal generation (father, father's brother, mother's brother, mother, mother's sister, etc.) link the Austronesian social formations found on Timor to earlier forms of Austronesian social organization that are still present in Taiwan and western Austronesia. Regarding eastern Indonesia, especially the island of Flores, Timor, and the islands of Maluku, it was considered that this area preserved elements of the oldest forms of Indonesian society (Fox, 2004a), particularly in various encompassing systems of marriage exchange and in the reliance on complex dual cosmologies.
In Timor-Leste, as elsewhere in Austronesia, as Meitzner-Yoder and Joireman (2019) pointed out, landscape knowledge is connected to emplaced spiritual entities, and this relationship enacts the strength and durability of land claims in customary land systems involving precedence; the link is conceptualized as a concatenation of ties and relationships defined by reference to their proximity to a common point of origin (Barnes, 2011). The knua, considered the original settlement of a tribe or clan, is also a main reference (e.g. Paulino, 2012).
Animism is considered to be the belief that places, animals, plants or other natural phenomena have awareness and feelings, so animists believe that there is no barrier between humans and other beings and that they can communicate through speech, song, dance and ceremony (Harari, 2015, p. 61). Freud (1918, p. 128) had already mentioned that animism is a system of thought and that the human race has developed, over the ages, three great representations of the universe: animist, religious and scientific, and that the first, while not yet a religion, contains its foundations.
When mentioning the Fataluku tradition, McWilliam (2011) reports that largely invisible spirit agents take on a variety of forms: they include the powerful mua ocawa (in Tetum: rai na'in) spirit owners of place; the chat chatu nature spirits that inhabit trees, springs and the sea; the bloodied and vengeful souls of those who have died bad deaths (ula papan/ula ucan); as well as the elusive and feared shape-shifting witches (acaré) who can entice and consume the unwary.
The notion of rai na'in in Tetum (or, for instance, rea netana in the Naueti language) is understood as 'source of the land' or 'master of the land', and exists in varying forms throughout Timor-Leste and the Austronesian cultural sphere, holding rights to the allocation and apportionment of land and natural resources (Trindade & Barnes, 2018): for example, Daralari claims to emplaced authority are based on narratives of origin, sometimes associated with tempu rai-diak (tranquil time), referring to an idealized past and the existence of a stable social order regulated by the rules of ukun (authority) and bandu (forbidden).
Regarding Southeast Asia, one can read the following: Despite its probable Arabic origin, the term adat resonates deeply throughout the Malay-Indonesian archipelago. Often defined as 'custom' or 'customary law', the word refers, broadly speaking, to the customary norms, rules, interdictions, and injunctions that guide an individual's conduct as a member of the community and the sanctions and forms of redress by which these norms and rules are upheld (Sather, 2004, p. 123).
Ritual ceremonies and traditional authorities occur on other islands of Indonesia, albeit with some specific characteristics: in Savu (Sabu), a small island located between Sumba and Timor, the two highest-ranked priests at Seba are named the deo rai, "lord of the Earth", and apu lodo, "descendant of the Sun" (Fox, 2004b), and the word rai applies to a territorial domain, like the same word in Tetum.
Lulik
The core of the lulik concept concerns regulating human relations with divinity through the intermediation of nature and the invocation of the spirits of the ancestors (Araújo, 2016). Trindade (2016) considers that lulik refers to the spiritual cosmos that contains the divine creature - Maromak in Tetum, meaning bright, luminous, and considered originally a female concept - and the spirit of the ancestors, together with the spiritual root of life, including the rules and regulations that dictate the relationships between people, and between them and nature. McWilliam et al. (2014) pointed out that the meaning spectrum of lulik goes far beyond the usual concept of 'sacred and prohibited', mentioning, from an outsider's perspective, that lulik and its equivalents in other local living languages in Timor-Leste - for instance, tei in Fataluku, po in Bunak, falún in Makassae, luli in Kemak and Naueti - refer to a whole range of objects, places, topographic features, categories of food, types of people, forms of knowledge, behavioral practices, architectural structures and periods of time. Also, regarding the Oecussi enclave dominated by the Meto ethnic group, Meitzner-Yoder (2011) mentions the term nuni as equivalent to 'taboo', or lulik, which sometimes serves as a mnemonic device for the history of family migration. In Oecusse, Usi-neno is the designation for the divine being, equivalent to Maromak in Tetum.
Remarks on the Portuguese Presence and Influence in Timor
Ancient chroniclers of the XVI century, for instance the Portuguese navigator Duarte Barbosa, writing around 1516, said that the merchants '(…) sail from this city of Malacca to all the islands that are all over this sea, and to Timor, from where they bring all the white sandalwood, which among the Moors is very esteemed and very valuable' (Barbosa, 1966, p. 203).
The map elaborated by the Portuguese navigator and cartographer Francisco Rodrigues, who participated in the first voyage to the Moluccas, dates back to ca. 1512, though the sketch was presumably based on Javanese cartographic information (Leitão, 1948, p. 51), and bears a label saying "the island of Timor where sandalwood is born" (e.g. Casquilho, 2014). Ptak (1983) refers to the existence of earlier Chinese mentions of the island of Timor, dated ca. 1250 and 1345, where it is named Ti-wu or Ti-men; in the treatise named Tao-i-Chi-Lueh, ca. 1350, a Chinese chronicler reported that in the mountains of the island grow no other trees but sandalwood, which was very abundant, the wood being traded in exchange for silver, iron, porcelain and fabrics. Also, Timor is mentioned in a Javanese poem dated from 1366 (Hägerdal, 2012, p. 15).
The chronicler of the circumnavigation of Magalhães and Elcano, Antonio Pigafetta, wrote that, having arrived in Timor in 1522, he saw a boat from the Philippines (Luzon) carrying sandalwood; he also drew a sketch of the island, albeit with an atypical triangular shape - as shown in Figure 1 - depicting several places, some of whose names we can still recognize. Pigafetta also reported that the white sandalwood is found on that island and nowhere else, and pointed out: '(…) when they go to cut the sandalwood, the devil (according to what we were told), appears to them in various forms and tells them that if they need anything they should ask him for it. They become ill for some days as a result of that apparition.' (Pigafetta, 1906, pp. 166-167) One could presume that the term "devil" is a European-biased misconception of a spiritual reference, linked to the lulik concept.
The Portuguese presence in Lifau was stabilized from 1652/53 onwards (Tavares, 2019, p. 24). With regard to the emergence of the Catholic religion in Timor, there are some salient prior events: in 1556, friar António Taveira had converted hundreds of Timorese (e.g. Leitão, 1948, pp. 11-12) and, almost a century later, following an incursion of a king of Makassar who retained about 4,000 Timorese captives for slave trading, a reaction induced the baptism of the queen of Mena in June 1641, followed by the king of Lifau and hundreds of people (Morais, 1944, p. 111). Yet those prior events did not entail a global adherence to Catholicism, since by the end of the Portuguese colonization it is estimated that less than 30% of the population followed that religion.
Narrating from a Eurocentric perspective, Artur Teodoro de Matos (1974, pp. 34-36) wrote that the traditional Timorese religious belief system consisted of a set of superstitions, based on a mixture of fear and adoration for the spirit of the dead, materialized in stones, birds, animals and even streams of water or objects endowed with mysterious magical, beneficial or evil power, which they call lulik, meaning sacred and intangible; the author also reports that the rites of the traditional animist religion were designated estilos and consisted essentially of animal sacrifices accompanied by certain prayers, performed on all serious occasions of existence, for example whenever the Timorese entered into a pact of friendship and mutual help with the purpose of joining forces to fight a common enemy.
Alfred Russel Wallace, the eminent naturalist, who stayed at Díli - the capital of the Portuguese colony since 1769 - and its surroundings for about four months in 1861, identified in his essay on the Malay Archipelago the traditional cultural practices he had observed, naming them pemali and equating them with the concept of taboo prevailing in the islands of the Pacific (Wallace, 1890, p. 149). Also, in the late nineteenth century, the Portuguese governor Affonso de Castro (1867, pp. 315-317) links pemali with the sacred house - uma-lulik or uma-lisan in Tetum - mentioning that the Timorese estilus are the set of rules established by tradition and observed by the communities of the island. The Tetum word estilu, adapted from the Portuguese word estilo (meaning style), refers to traditional ceremonies incorporating a ritual sacrifice, associated with the concepts of lisan and ukur relative to tradition and traditional practices (Costa, 2000, p. 230), which can be correlated with the term adat usual in neighboring Indonesian islands, as in the island of Flores, associated with diarchic concepts of governance and dual cosmologies (e.g. Viola, 2013, p. 16). In addition, Boarccaech (2020) mentions that lisan has a wide and fluid spectrum of meaning, being applied to places and objects, for identifying the sacred and the profane, to differentiate people and their families and, at the same time, to refer to the extended family and to what connects people to each other in the same group.
Towards a Semiotic Characterization of Tara Bandu
Tara bandu is an expression in Tetum language -considered lingua franca and co-official, among about fifteen to twenty local languages in Timor-Leste (e. g. Taylor-Leech, 2009) -which means literally "hang the prohibition": tara means hanging, normally from a rope, and bandu signifies prohibition (Costa, 2000, pp. 49, 311).
Meitzner-Yoder (2007b) helps clarify the subject with a focus on the Oecussi enclave terminology: tara bandu is initiated by a ritual involving spoken prohibitions, animal sacrifice and a feast, and terms for these prohibitions are known as kelo in the local language. In other languages of Timor, one has the following examples: Tokodede - temi kdesi; Makasae - lubhu (or badu); Bunak - ucu bilik or ucu ai-tahan; Tetum Terik - kahe abat; Mambae - tar-badu; Fataluku - lupure.
There are reports mentioning that tara bandu is similar to another form of local wisdom in Indonesia known as lubuk larangan in Jambi, on the island of Sumatra - a site of the ancient Srivijaya kingdom - though focusing more on areas around rivers, and registered as an Indonesian intangible legacy: lubuk larangan is considered unique in Indonesia (Benny et al., 2021). Notwithstanding, McWilliam (2011) mentions the existence of a comparable complex on Java, where the spirits of the place are designated penguasa, and the lord of the land (rai na'in in Tetum) is named tuan tanah. It is also known that the government of Indonesia banned - or, at least, strongly discouraged - the practice of tara bandu during the occupation of Timor (Carvalho, 2011, p. 61).
Next, we will discuss that tara bandu is a complex choreographic ritual, emerging from hybridizations of static and mute iconic frameworks of the Timorese tradition (kero/horok/bunuk) and the Portuguese choreographic and vocalized colonial bandos. The ritual also incorporates animist rites including animal sacrifices present in other Timorese estilus associated with agricultural harvests and fertility rites -like the sau batar ceremony linked to the harvest of corn -and, mainly in the post-independence period, combining a new dimension: an expression of the Catholic liturgical rites.
The "Kero" Iconic Device
Thomas Sebeok (2001, p. 104) mentions that the magic efficacy of the kind of icon called effigy has long been recognized in ritual experience, and that the English word fetish was directly adopted from the Portuguese word "feitiço" - meaning charm, sorcery - originally applied to objects used by the people of the West African coast as talismans and regarded by them with superstitious dread. Fetish(ism) is an example of semiosis - defined by Peirce as the action of signs - that overlaps several sign categories: the term was first coined by Charles de Brosses in 1757 and proposed as a general theoretic term for the primordial religion of mankind (Pietz, 1988), with a cornerstone in people attributing personality and intentional power to the impersonal realm of material nature.
In the late XIX century, the Scottish explorer and naturalist Henry Forbes spent several months in Timor-Leste (from late 1882 to 1883), travelling and reporting what he had seen or was told; when he was going to visit the rajah of Samoro, in whose territory stood the Peak of Sobale [Soibada], he saw something that he depicted in a sketch (Figure 2), then stating: 'This ghastly sign-post, called a kero, had been erected as a warning to all thieves and offenders of the dire punishment that would be mercilessly meted out to them (…) who had been convicted of stealing fruit, as the bunch of cocoa, and pinangnuts hung on a railing below them indicated. The law of the different kingdoms is a lex non scripta, and has been handed down from generation to generation. The Leorei is judge as well as king, but he acts only, however, on the rare occasions when a case is brought to him (…)' (Forbes, 1885, pp. 472-73).
Figure 2. Sketch of a kero (Forbes, 1885, p. 472)
The word kelo is the proper designation for tara bandu in the Oecussi local language, and is applied to extensive areas of many different individual owners, while an individual prohibition on specific trees is called bunuk; only a tobe - an empowered spiritual leader, in other places named lia na'in - can institute or lift a kelo, while any individual can place a bunuk (Meitzner-Yoder, 2007a).
Also, those terms can be considered associated with the Tetum term horok - this last one translated into Portuguese as "feitiço" (Costa, 2000, p. 164) - a noun meaning sorcery in English, and also prohibition; yet, as of today, both terms are still in use in Timor-Leste but with somewhat different meanings: horok is used mainly to state a prohibition in an iconic framework at the spatial and social scale of a family property (like bunuk in Oecussi), while kero (gero in Kemak), at least in some places, denotes a mantra used to prevent negative situations, like heavy rain and flooding.
In this text we will use the term kero in the sense of the sketch depicted by Forbes shown in Figure 2. In fact, we are dealing with a language of iconic signs, disposed in a particular, vectorial way. Charles Morris (1938, p. 35) clarified that a language in the full semiotic sense of the term is any intersubjective set of sign vehicles whose usage is determined by syntactic, semantic and pragmatic rules; he also highlighted that syntactical rules determine the sign relations between sign vehicles relative to their disposition, while semantic(al) rules correlate sign vehicles with objects through meaning, and pragmatic(al) rules state the conditions in the interpreters under which the sign vehicle becomes a sign.
A description made by the Portuguese military officer José dos Santos Vaquinhas in a text written in 1885 - transcribed by Ricardo Roque (2012, p. 582) - shows that he was referring to the kind of kero/horok framework, though he named it bando, as he wrote: '(…) orders issued by the chiefs for the knowledge of the people are usually by means of bandos, and when these orders have only effect in a particular place, and so that ignorance is not alleged in the location where they are given, next to the paths they are hung on a pole, or tied to a coconut tree or any tree, certain and certain objects, which by themselves indicate the species of the order; for example, a coconut leaf, a tree branch, a wooden sword, a rope and an egg, and other combined utensils, they serve to indicate the object of the order, of any prohibition, and even the importance of the fine that transgressors have to pay.'
Also, referring to Timor, Alfred Wallace stated: (…) a prevalent custom is the pemali exactly equivalent to the 'taboo' of the Pacific islanders, and equally respected, and it is used in the commonest occasions, and a few palm leaves stuck outside a garden as a sign of the pemali will preserve its produce from thieves as effectually as the threatening notice of man-traps, spring guns, or a savage dog, would do with us. (Wallace, 1890, pp. 149-150).
Applying Peirce's Typology of Signs
The sketch in Figure 2 shows a vectorial structure, like an arrow pointing upwards, standing for an implication: if someone steals the fruits shown hanging at the lower level of the structure, the punishment would be being impaled or decapitated, as depicted at the upper level forming a triangle, with two effigies of heads at the lateral corners and an impaled figure at the central position. Vector and target are positively correlated signs (Casquilho, 2010): in this case, the kero is a vector that concatenates icons conveying a message, and the target is the people to whom the message is addressed.
Charles Sanders Peirce conceived a Theory of Signs under the general scope of logic as semiotic. Peirce's general concept of sign was defined as follows: a sign or representamen, is something which stands to somebody for something in some respect or capacity (Buchler, 1955, p. 99).
A sign may be simple or complex; anything or phenomenon, no matter how elaborated, may be considered as a sign from the moment it enters into a process of semiosis (Everaert-Desmedt, 2020): this process involves a triadic relationship between a sign or representamen (a first), an object (a second) and an interpretant (a third). For instance, Boarccaech (2021) addressed the Peircean concept of interpretant -the mental effect of the sign in the interpreter -when referring to the multidimensional meaning(s) of lisan, in the case with a focus on the Humangili community in Ataúro island.
However, signs are almost always a mixture of types and, at most, one can elucidate the dominant type, conveying a hierarchic articulation. Using Peirce's terminology, the ten categories of signs he elaborated, depicted in Figure 3, are anchored in a combination of triadic references, referring to: (i) the sign itself (qualisign, sinsign, legisign); (ii) the object (icon, index, symbol); and (iii) the interpretant (rheme, decisign, argument).
The following figure shows the articulation herein proposed concerning the hybridization process: from a dicent indexical sinsign concerning the subject of Figure 2 to an argument (symbolic legisign) relative to tara bandu as it occurs nowadays.
Figure 3. Peirce's ten classes of signs (Buchler, 1955, p. 118): in red, the characterization of the "kero" depicted in Figure 2; in blue, the proposed evolution by hybridization of the sign tara bandu into a choreographic argument.
Retrieving Figure 2, one can say that the sketch of the kero reveals a dicent indexical legisign, since it establishes a law associated with an implication: as previously mentioned, when read upwards, stealing the fruits entails punishment, and both the forbidden object(s) (the fruits) and the sanction(s) are revealed in an iconic way, by direct similitude with the object. Remembering Peirce's words, a dicent indexical legisign is 'any general type or law which requires each instance of it to be really affected by its object in such a manner as to furnish definite information concerning that object; it must involve an iconic legisign to signify the information and a rhematic indexical legisign to denote the subject of that information; each [particular] replica of it will be a dicent sinsign of a peculiar kind' (Buchler, 1955, p. 116).
In the "Sarzedas document" - transcribed by Affonso de Castro and containing a set of instructions issued in 1811 by the Count of Sarzedas, Governor of the State of India, to the Governor of Timor, Cunha Gusmão - paragraph 50 makes reference to the enslavement of Timorese due to their failure to pay the fines relative to bandos issued by local kings (Castro, 1867, p. 202). Also, Ricardo Roque (2012, p. 581) tells us that in 1895 the liurai (king) of Manufahi had proclaimed a bando forbidding all his subjects to reach an agreement with the Portuguese government.
On the Portuguese Colonial Bandos
It does not seem easy to find explicit references to the Portuguese colonial bandos issued in Timor, but one can refer to some (Figueiredo, 2011, pp. 210, 284, 380): in 1785 there was a letter of Vieira Godinho asking the Governor of India for a delivery of coins appropriate for small transactions to be used in Timor, to be accompanied by the draft of the corresponding bando; also, it is known that governor Manuel Saldanha da Gama used bandos to announce cash prizes for those who would establish coffee plantations meeting minimum requirements, ca. 1855; the same governor, ca. 1852-53, ordered a proclamation saying that those who, after the publication of the bando, still continued to mark their slaves would be subject to a punishment imposed by the government.
Ricardo Roque (2012, p. 572), referring to Timor, states the following: in the kingdoms, the governor's words were never just words; they constituted a heterogeneous ceremonial collective, called bandos, which brought together things invested with special authority (drums, flags, rifles, papers) and people invested with the status of spokespersons (officials and delegates) or guardians (soldiers); also, bandos took place as a liturgical action of reading aloud an order from the governor, accompanied by the sound of a drum.
Concerning the etymology of bando, Adrian Poruciuc (2008) elucidates the subject: the word bando roots from Indo-European verb bhā meaning 'to speak', then becoming the Frankish and the old Germanic term bann and recorded in late Latin as bannum, meaning 'proclamation'; the term was associated with primitive Indo-European references to archaic religious-juridical notions, including title(s) of nobility and coin, then Latinized in medieval documents as banus, the root for several senses including 'order under threat of punishment'; later appears ban meaning 'proclamation, confiscation, prohibition' in Old French and Old Provençal, then entering neighboring Romance idioms; also, Fr./Prov. ban preserved the archaic meaning of 'public announcement', but it also acquired secondary meanings such as 'prohibition of harvesting'. Also, the word 'band', meaning a group of people, can be considered derived by metonymy.
It was a common practice for the Portuguese colonial power to rule by issuing bandos, with different objectives. In Figure 4, one can see the apparatus of a Portuguese colonial bando in the early XIX century at Rio de Janeiro, Brazil, as illustrated by Thierry Frères (1839), based on a drawing by Jean Baptiste Debret. Using once more Peircean concepts and terminology, such a ceremony is no longer of an iconic nature, but symbolic, where the symbol stands for concepts established by convention and habit; it could be named a choreographic argument, using the X class of signs of Peirce's classification as depicted in Figure 3, anchored in the principle of compositionality of meaning, which states that the meaning of every complex expression is determined by the meanings of its parts plus the mode of their combination (Peregrin, 2012). Remembering Peirce's words: if a sign's interpretant represents its object as being an ulterior sign through a law, then its object must be general, that is, the argument must be a symbol; as a symbol it must further be a legisign, incorporating a law (Buchler, 1955, pp. 117-118).
On Hybridization
In relation to the Timorese bandus, one has already seen that the Portuguese referred to the kero device naming it bando. In a testimony made in 1943 by another colonial military officer, Captain José Simões Martinho - also transcribed in Roque (2012, p. 583) - describing the customary practice in the first half of the 20th century, he told: '(…) the bando - which no longer has easy compliance because it had become a function of any insignificant chief and even the owner of some mango trees - consisted of two stakes of approximately two meters high, supporting, horizontally placed, a stick from which hung the notice or edict; this was easy to read: a coconut still tender, for example; a small rope; a paddle; a goat foot and some eggshells indicated that it was prohibited to harvest coconuts, under penalty of imprisonment (rope), of slapping a few slaps (paddle) and of paying a fine (goat and eggs); the number of eggs was sometimes indicated by a string with knots, one corresponding to each egg.'
Thus, Captain Martinho mentions that such a regulatory injunction, the Timorese bandu, had lost pragmatic relevance through the trivialization of its use. Such a framework was typically iconic, static and mute. However, nowadays, tara bandu ceremonies have several dimensions associated, and the dynamic components, including sound and drums, are undoubtedly present, as one can see illustrated in Figure 5, below.
Also, if a law is meant to be complied with, it should be considered relevant. One should then recall Relevance Theory, a cognitive psychological theory which claims that the use of an ostensive stimulus may create precise and predictable expectations of relevance, relevance therein being defined in terms of a trade-off between cognitive effects and processing effort (e.g. Wilson, 2017): other things being equal, the greater the cognitive effects and the smaller the processing effort, the greater the relevance, while every ostensive act communicates a presumption of its own relevance.
Figure 5. An aspect of a tara bandu ceremony; Hera, Timor-Leste, 2017 (photo: authors)
In fact, one can read in Forbes (late XIX century) that the drummer and the standard or flag - colonial tools and symbols, also remembering that the Portuguese word for flag is bandeira, derived from bando - were already assimilated and incorporated into the Timorese tradition. When visiting the rajah (liurai) of Turskain in the slopes of the Rusconna mountains, he noted: '(…) the katjeru, or royal drummer, is a hereditary official of high and coveted rank in the kingdom, for they hold that when Maromak made Timor he gave the people a standard-bearer to lead them to war, and a katjeru to walk beside him "like man and wife"' (Forbes, 1885, p. 442).
In Table 1 below, some differences between the kero/bandu concept and the colonial bando are contrasted. Hybridization of customs and injunctions is a common feature in postcolonial regimes and one can mention some references: a hybrid turn more often engages with community, customary or more generally societal efforts regarding security, justice, peace, welfare, conflict resolution or governance (Brown, 2018), and a hybrid order refers to contexts where differing life-worlds are each co-represented to significant extents, with the focus on the customary and the modern (Grenfell, 2018).
-In Table 2 we list the main sequential phases of a tara bandu ceremony concerning participatory land use planning.
-In Figure 6, one can see a goat being sacrificed: the animist and sacrificial pole is attached to a cross indexing Catholic religion, thus forming a noticeable hybrid sign; it is from the horizontal axis of the cross that items are pending (hanging) symbolizing interdictions.
- In the same tara bandu ceremony, as depicted in Figure 7, the Catholic priest intervenes with a speech - a kind of homily - in another hybridization dimension of the ritual.
Figure 7. Batara suco, 2015 (photo: authors)
Final Notes
Tara bandu currently incorporates multicultural dimensions: in addition to the ancient animist practices, there are also traits of the colonial bandos and rites of the Catholic liturgy, thus becoming an elaborated symbol and a semiotic hybrid. Work in sociolinguistics and anthropology often centers on how cultural-level phenomena are reinforced, and even constructed, by discourse but also by internal representations and thoughts (Yus, 2010). Babo-Soares (2004) highlights that Timorese exegeses depict the sequence of events in life in a configuration of ai-hun (tree-trunk), or 'tree', which in their minds can be called ai (tree) only if it has abut (roots-origin) and tutun (tip-end).
Another point to remember in this text is that the Portuguese presence and influence in Timor-Leste is a further relevant dimension for understanding Timorese cultural frameworks, namely through hybridization processes. The notion of hybridity proposes an alternative lens that aims to move beyond normative notions and beyond dichotomous thinking that articulates states and non-states as discrete and independent actors and institutions (Jackson & Albrecht, 2018) and, as a metaphor, the expression relates to linguistic compositions from different languages or, more generally, to everything that is composed of different or incongruent elements (Ackermann, 2012).
Tara bandu was reactivated and replicated after Timor-Leste independence in 2002 and is considered a community-based natural resource management process anchored on traditional sociopolitical structures (Browne et al, 2017), often directly related to events in the agricultural calendar, particularly crop production.
Tara bandu is currently mentioned as an example of a positive bottom-up, environment-based peacebuilding process, in what is considered a recent and promising field of action and research that has the potential to facilitate outcomes in several contexts (Miyazawa & Miyazawa, 2021; Ide et al., 2021), used to resolve both environmental and social issues while demonstrating a degree of hybridization between transnational and local norms and practices.
Water aggregation and dissociation on the ZnO(10-10) surface
A comprehensive search for stable structures in the low coverage regime (0–1 ML) and at 2 ML and 3 ML using DFT revealed several new aggregation states of water on the non-polar ZnO(10-10) surface. Ladder-like structures consisting of half-dissociated dimers, arranged side-by-side along the polar axis, constitute the most stable aggregate at low coverages (≤1 ML), with a binding energy exceeding that of the monolayer. At coverages beyond the monolayer – a regime that has hardly been studied previously – a novel type of structure with a continuous honeycomb-like 2D network of hydrogen bonds was discovered, where each surface oxygen atom is coordinated by additional H-bonding water molecules. This flat double-monolayer has a relatively high adsorption energy, every zinc and oxygen atom is 4-fold coordinated and every hydrogen atom is engaged in a hydrogen bond. Hence this honeycomb double monolayer offers no H-bond donor or acceptor sites for further growth of the water film. At 3 ML coverage, the interface restructures, forming a contact layer of half-dissociated water dimers and a liquid-like overlayer of water attached by hydrogen bonds. The structures and their adsorption energies are analysed to understand the driving forces for aggregation and dissociation of water on the surface. We apply a decomposition scheme based on a Born–Haber cycle, discussing difficulties that may occur in applying such an analysis to the adsorption of dissociated molecules, and point out alternatives to circumvent the bias against severely stretched bonds. Water aggregation on the ZnO surface is favoured by direct water–water interactions, including H-bonds and dipole–dipole interactions, and by surface- or adsorption-mediated interactions, including enhanced water–surface interactions and reduced relaxations of the water molecules and surface. While dissociation of isolated adsorbed molecules is unfavourable, partial or even full dissociation is preferred for aggregates. Nevertheless, direct water–water interactions change very little in the dissociation reaction. Dissociation is governed by a subtle balance between strongly enhanced water–surface interactions and the large energies required for the geometric changes of the water molecule(s) and the surface. Our conclusions are discussed on the background of the current knowledge on water adsorption at metals and non-metallic surfaces.
Introduction
Furthermore, zinc oxide is developed as an active material for water splitting and photocatalysis.[8,9] However, water adsorption, aggregation and wetting are strongly dependent on the subtle interplay of water-substrate and water-water interactions and on the availability of dangling OH groups.[11,13] The presence of water strongly modifies the surface properties depending on the substrate and coverage. For example, it may passivate dangling bonds and stabilize or destabilize reconstructions. Furthermore, adsorbed water can catalyse heterogeneous reactions and corrosion by proton transfer and by solvating products and transition states. On the other hand, water can also block active sites.[10,12] The adsorbed water itself may have distinct properties, differing substantially from bulk ice or water in structure, diffusivity, freezing point, dissociation degree and solvating properties, due to the confinement in a thin layer, interactions with the substrate and the often epitaxially templated arrangement.[10,11,14,15] Dissociation of adsorbed water is of particular interest for catalysis, as this may be the first step in the activation of water molecules for chemical reactions.
The water binding mechanism on solid surfaces has been reviewed previously, analysing different types of interactions including electrostatic ion-dipole and dipole-dipole interactions, dispersion interactions, and more specific chemical interactions.[10,11,16] It is commonly agreed that the main interaction of water molecules with surfaces is due to the doubly occupied 3a1 and 1b1 water orbitals (lone pairs) hybridizing/interacting with empty orbitals on metal atoms and cations,[11,14,16-22] with partial charge transfer to the surface.[16,19,20,23] In particular on non-metallic surfaces these interactions can be very strong, resulting in a Lewis acid-base type chemical bond.[16] The directional properties of the orbitals involved result in a preferred adsorption position slightly displaced from exactly on top of the metal atom and an orientation of the molecule nearly parallel to the surface.[19,20] In addition to this interaction with electron-deficient centres, water molecules form hydrogen bonds between the OH groups and anions or other water molecules.
A lot of experimental surface science studies and theoretical work has focused on metals, in particular on well-defined single-crystal surfaces.[17,18,23] The adsorption energy for water molecules on metals is in the range of 0.1 to 0.4 eV, increasing in the series Au < Ag < Cu < Pd < Pt < Ru < Rh.[14,15,18-20] The variation is due to the metal-water interaction,[24] which is weaker than the nearly constant water-water interaction or of comparable strength.[15,19,21] Both effects are due to the electron-withdrawing effect of the water-metal interaction.
Isolated water molecules easily diffuse on metal surfaces, even at low temperatures, facilitating aggregation, and a special ''waltzing'' mechanism promotes diffusion of dimers.[14,15,19,25] At very low temperatures and low coverage, initially isolated molecules and small clusters are observed.[15,21,23] However, with increasing temperature diffusion sets in, leading to water aggregates of various sizes stabilized by H-bonds.[17,19,21,23] One-dimensional (1D), chain-like aggregates have been observed on the Cu(110) surface.[26] Density functional theory (DFT) calculations revealed that they are based on edge-sharing water pentagons adsorbed on top of Cu atoms, which are also predicted for Ni(110), while edge-sharing hexagons are more stable on the (110) surfaces of Pd and Ag.[26,27] On Pd(111), at 100 K and coverages of ~0.5 ML, complex rosette- or lace-like structures are observed that are built from hexagons interconnected by additional H-bonded water molecules.[14,18,28] Early studies of saturated water monolayers on metals suggested a buckled hexagonal bilayer structure similar to the basal plane of ice Ih, based on the (√3 × √3)R30° low energy electron diffraction (LEED) pattern and a saturation coverage of 2/3 ML.[17,18,23,29] The bilayer consists of an epitaxial hexagonal 2D network of water molecules. Every second molecule is bound on top of a metal atom with a slightly upward tilted orientation as described above. The remaining molecules are not in direct contact with the metal and are adsorbed by H-bonds with the lower layer. DFT calculations supported this structure and showed that such hexagonal networks of H-bonded water can adapt to a wide range of substrate lattice constants with relatively low strain energies.[24] However, more recently high-resolution scanning tunnelling microscopy (STM) revealed a more complex behaviour of adsorbed water layers and a rich variety of structures, which also comprise 5- and 7-rings and other defects.[17] On Pt(111), 2D layers of water with (√39 × √39)R16.1° and (√37 × √37)R25.3° periodicity have been observed, while the (√3 × √3)R30° bilayer structure is stable only in small domains at 85 K and becomes stable in multilayers at coverages >2 bilayers.[19] On Ni, a (2√7 × 2√7)R19° monolayer has been observed, which reorders into an incommensurate ice film for multilayers.[18] These complex structures are due to a subtle competition of water-water H-bonding (number of H-bonds, optimum distances, angles) and water-metal bonding (epitaxial on-top position, parallel orientation), which leads to the stunning manifold of structures.[17] Recently, a unified model has been proposed that accounts for the relative stability of the different patterns on close-packed hexagonal surfaces.[30] The adsorption energy was decomposed into water-water and water-metal interactions and expressed as a parametric function of the lattice deformation. This approach allows predicting the most stable structure for Pd, Pt, Ag, Au, Ir, Rh and Ru close-packed surfaces. Besides layers of intact water molecules, mixed OH/H2O layers with (√3 × √3)R30° periodicity were also observed, which are flat in contrast to the buckled bilayer of intact water molecules.[18,20,31] Water dissociation is controlled by the metal-OH bond strength.[24] In the high coverage regime at low temperatures, where diffusion is too slow, metastable amorphous water films are formed.[17,18]
At higher temperature, two limiting behaviours are observed. On the (111) surfaces of Ni and Pt, for example, incommensurate bulk ice films are observed, indicating that the interface has restructured with preferred orientation along metal rows. On the other hand, Ru(0001) has a tightly bound (√3 × √3)R30° first wetting layer, which does not restructure upon further water adsorption. Therefore, de-wetting occurs and 3D ice clusters form on the persistent monolayer. There are two prerequisites for wetting and growth of multilayers: strong H-bonding of multilayer ice due to the presence of free OH groups (or via easy restructuring) and a suitable lateral registry to match the 3D structure of ice.[13,17,18,29] In the case of non-metallic substrates, the information is much more scarce and scattered. For ionic compounds, the trends in water monomer adsorption and dissociation on flat (100) surfaces of a broad range of alkaline earth oxides and sulphides, as well as alkali fluorides and chlorides with rocksalt structures, have been studied with DFT.[32] On the surfaces of these mostly ionic compounds, the anions and cations, which are octahedrally 6-fold coordinated in the bulk, are arranged in a checkerboard pattern and have 5 nearest neighbours. Adsorption of intact isolated water molecules is favoured on most substrates, including all alkali halides, the alkaline earth sulphides and MgO, while dissociation is preferred only on the heavier oxides CaO, SrO and BaO. In the dissociated structures, one hydrogen is transferred to a surface oxygen and the hydroxyl group adsorbs above an adjacent metal ion with the OH bond directed towards the vacuum. The adsorption energies increase from −0.9 eV for CaO to −1.5 eV for BaO. For molecular adsorption, different types of structures have been found. On MgO, MgS, CaS, LiF, LiCl, and NaCl, the water oxygen is above a metal ion with a slight lateral displacement and the molecular plane is almost parallel to the surface, as described in the beginning. On the other hand, the water molecule is on a hollow site with the hydrogens facing downward, forming two H-bonds for SrS, BaS, NaF, KF, RbF, KCl, and RbCl, and one H-bond for CsF and RbF. The adsorption energy of intact water molecules on cations decreases with the size of the metal ion, while the strength of the H-bond to the surface anion increases with the size of the cation. The opposing trends lead to the observed crossover from metal-water oxygen binding to H-bonding at the anions.
For the covalently bound semiconductors with tetrahedral coordination and diamond, zinc-blende or wurtzite structure, passivation of the partially filled dangling bonds at the surface is critical. Electron-counting considerations[35-37] allow one to understand the complex reconstructions of silicon as well as the stoichiometry of surface vacancies, adatoms and adsorbates on polar surfaces of III-V, II-VI and I-VII compound semiconductors. For the non-polar surfaces of compound semiconductors, an autocompensation mechanism leads to a buckling of the surface dimers or chains. Charge transfer from the cation dangling bonds to the anion dangling bonds leads to fully occupied and completely empty orbitals. The outward relaxation of the anions increases the s-character of the completely filled dangling bond, lowering its energy, while the inward relaxation of the cations, which approach an almost planar 3-fold coordination, increases the p-character of their empty dangling bond orbitals, pushing them higher into the conduction band. This opens a band gap in the surface states. At 1 ML, a structure with half-dissociated water dimers is favoured on ZnO, in contrast to GaN.[44] The trend of decreasing dissociation of the contact layer is also observed at the interface with bulk water. For GaN(10-10), 80-100% dissociation was calculated,[8,46,47] while for ZnO(10-10) a dissociation degree slightly larger than 50% was predicted based on DFT molecular dynamics.[8,48] Aggregation of water was also reported on the surfaces of ionic and semiconducting materials. On NaCl(100), tetrameric water clusters have recently been reported as basic building blocks by cryogenic STM and DFT.[49] The water molecules are bound on top of Na+ ions, oriented parallel to the surface and H-bonded in a cyclic fashion. These tetramers can form chain- or flake-shaped larger clusters and a 2D layer via additional linking water molecules that accept H-bonds from two neighbouring tetramers and donate one H-bond to a chloride anion. In addition, (1 × 1) bilayers with 2 ML coverage and c(4 × 2) overlayers with 1.5 ML coverage have been observed on NaCl(100).[15,50] On MgO(100), water clustering and wetting layers can coexist.[15] At low coverage, a c(4 × 2) structure with 1.25 ML coverage was observed, which transforms at 185 K into a monolayer with p(3 × 2) structure that is stable up to 235 K.[15,51] At 1 ML coverage, 2 of the 6 water molecules are dissociated and the water molecules and OH groups are bound to Mg cations, while in the c(4 × 2) structure the dissociated OH groups have a larger distance from the surface and are H-bonded by four water molecules occupying the Mg sites. Apart from isolated molecules, small clusters and 2D layers,[11,15,16] some examples of 1D aggregates have also been reported. On TiO2 surfaces, water aggregation follows the anisotropic row structure of the surface.[52,53] On the other hand, on CaO(100) a thermodynamically stable phase of extended 1D clusters was reported that breaks the 4-fold symmetry of the surface.[54] Symmetry-broken half-dissociated tetramers direct the linear growth, and the larger lattice constant of CaO destabilizes a 2D layer as on MgO; hence a 1D structure is preferred.
ZnO crystallizes in the polar wurtzite structure with hexagonal symmetry and has four low-Miller-index cleavage surfaces: cleavage perpendicular to the polar c axis leads to two distinct surfaces, the Zn-terminated (0001) surface and the O-terminated (000-1) surface, while cleavage along the c axis results in the nonpolar surfaces (10-10) and (11-20) with a stoichiometric surface termination.[59-68] In ZnO powders, the nonpolar facets contribute 80% of the surface.[7] The clean, low-index, non-polar (10-10) and (11-20) facets have been the subject of many experimental and theoretical investigations. Different experimental surface science techniques, including LEED,[58,69] high-resolution transmission electron microscopy (HRTEM),[70] and X-ray photoelectron spectroscopy (XPS),[71] as well as electronic structure calculations,[9,72-74] have revealed that these stoichiometric surface terminations are auto-passivating. Upon cleaving the crystal, an empty dangling bond at the Zn cation and a fully occupied dangling bond at the O anion are created.[73] To lower the surface energy, the cation rehybridizes from sp3 towards sp2 and moves downward by about 0.34 Å until it lies nearly in the plane of its three anion neighbours. The anion, on the other hand, almost stays in a bulk-like position while its bond angles decrease, leading to an increased s-character of the lone pair. The result is a tilt of the ZnO surface dimers by about 12° and a dimer bond length contraction of roughly 7%, in good agreement between experimental observation[69,70] and theoretical prediction.[73] The adsorption of water on ZnO has captured attention for many years due to its relevance for important catalytic processes, including the water gas shift reaction and methanol synthesis.[7] Zwicker and Jacobi used thermal desorption spectroscopy (TDS) and XPS to study the adsorption and condensation of water on ZnO single crystal surfaces.[75,76] Their studies identified several adsorption states of water with different desorption energies on the ZnO(10-10) surface depending on the exposure. Based on the atomic and electronic structure of the clean ZnO(10-10) surface discussed above, favourable interactions of water molecules with this surface were predicted as either molecular adsorption, with the water oxygen atoms coordinated to surface Zn atoms and hydrogen bonds between the water molecules and surface O atoms, or dissociative adsorption, with H atoms and OH groups saturating the dangling bonds of the O and Zn atoms at the surface, respectively.[44,77-79] Intact isolated molecules bind to the surface via a strong Zn-O covalent bond between the Zn cation and one of the water lone pairs and donate one H-bond to the surface oxygen across the trench.
Formation of a water monolayer on the ZnO(10-10) surface has been reported in many studies. Experimental observations based on He-atom scattering (HAS), low-energy electron diffraction (LEED), STM images[77,80] and high-resolution electron energy loss spectroscopy (HREELS)[81] have agreed on the formation of a 2D water superstructure with (2 × 1) periodicity, having long-range order and existing up to 340 K in ultra-high vacuum conditions. In the STM images,[80] domains with half-dissociated (2 × 1) and fully molecular (1 × 1) periodicity of water molecules coexist, together with a third domain with (2 × 1) periodicity but less corrugated than the half-dissociated one. Though this last domain could not be assigned from the DFT calculations, several studies using DFT[43-45,77-80,82-84] and ReaxFF[78] have confirmed the prevalence of the half-dissociated (2 × 1) domain over the (1 × 1) molecular structure and in some cases predicted the existence of a monolayer of fully dissociated water molecules with (1 × 1) periodicity binding to the surface as strongly as the (1 × 1) molecular monolayer. Half-dissociative adsorption may occur as a compromise between the steric repulsion and covalent and hydrogen bond formation with both the substrate and the impinging molecules.[82] The driving force for dissociation was attributed to the hydrogen bond interactions, which gain in strength with increasing coverage, leading to almost degenerate molecular and dissociative adsorption modes at monolayer coverage.[84] The presence of these water-water hydrogen bonds is the key issue that drives the stabilization of the adlayer and the dissociation process that occurs at high coverages.[82] The calculated energy barrier per water molecule to go from a molecular to a dissociated monolayer is 0.02 eV, while there is no barrier to go from a molecular to a half-dissociated monolayer.[44,78,84] Furthermore, the possibility of domains with mixed (2 × 1) and c(2 × 2) structures of half-dissociated molecules was also predicted.[44,79] In the high coverage regime, water films have been addressed using ReaxFF calculations.[85] High dissociation degrees (80%) and proton transfer reactions between water molecules and hydroxyls via a Grotthuss-like mechanism in the contact layer have been reported. Very recently, important contributions to the microscopic understanding of the liquid water/ZnO(10-10) interface were made[8,48] by comparing the interface structure and proton dynamics of a water monolayer and a thick water layer. Using ab initio MD, 50% dissociation was found in the contact layer of a liquid water film as well as in the monolayer. Due to H-bond fluctuations that lower the proton transfer barrier, a higher rate of dissociation and recombination was found in the contact layer of the liquid film compared to the monolayer.
Though the adsorption of water on the ZnO(10-10) surface has been studied for isolated single molecules, for monolayers, and to some extent for the interface with bulk liquid water, no information is available about the aggregates that may form in between these three coverage regimes, nor about the coverage dependence of their binding energies. Therefore, we investigate the questions of water aggregates at low coverage and of multilayer formation. Furthermore, we analyse the mechanisms stabilizing such aggregates as well as the trends in the binding energy and dissociation degree with respect to coverage. We present a comprehensive and systematic search of all possible aggregates by successively increasing the coverage of water molecules up to 1 ML, using density functional theory (DFT). Molecular, dissociative and partially dissociative adsorption modes are considered. For all aggregates, we perform a thorough search of energy minima and investigate the coverage dependence of their adsorption energy up to the limit of formation of higher aggregates. The coverage regime between the monolayer and thick films representing the interface with bulk water has not been studied with DFT previously, although TDS spectra clearly show distinct desorption peaks between those assigned to bulk ice and the monolayer.[75] We present first results on interfaces with 2 ML and 3 ML water coverage based on small unit cells, motivated by the observation of (2 × 1) periodicity at 1 ML[77,80,81] as well as at the contact layer with bulk water.[8,48] In order to gain insight into the driving forces for aggregation and dissociation, the strength of surface-water and water-water interactions, as well as the modification of the water molecule and ZnO surface geometries and the corresponding energies, are quantified using a thermodynamic cycle to decompose the adsorption energies. The impact of water adsorption on the passivation of dangling bonds, the surface states and the band gap is also analysed.
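In outline, and only as a generic sketch (the exact reference states and the grouping of terms used in the actual analysis are developed later and are not restated here), such a thermodynamic-cycle decomposition takes the form

$$E_{\mathrm{ads}} = \frac{1}{n}\Big[E_{\mathrm{slab}+n\mathrm{H_2O}} - E_{\mathrm{slab}} - n\,E_{\mathrm{H_2O(gas)}}\Big]$$

$$E_{\mathrm{ads}} = \Delta E_{\mathrm{def}}^{\mathrm{slab}} + \Delta E_{\mathrm{def}}^{\mathrm{H_2O}} + E_{\mathrm{int}}^{\mathrm{ww}} + E_{\mathrm{int}}^{\mathrm{ws}},$$

where the deformation terms measure the energy cost of distorting the relaxed surface and the gas-phase molecules into their adsorption geometries, and the interaction terms collect the water-water (ww) and water-surface (ws) contributions evaluated for the frozen fragments. For dissociated molecules, deciding how strongly stretched O-H bonds enter the deformation terms is one way of seeing the ambiguity mentioned in the abstract.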
Methods
Adsorption of intact water molecules, as well as dissociation into an OH group bound at zinc sites and a hydrogen adsorbed at an oxygen site, was considered. In addition, partially dissociative adsorption forming aggregates consisting of a mixture of intact and dissociated molecules was included. Furthermore, double dissociation of water into a Zn-bound oxygen and two adsorbed hydrogen atoms was tested. Formation of aggregates along the polar axis (rows) and along the trenches (columns) was studied. The latter arrangement offers the possibility of intermolecular hydrogen bonds.[44] At 2 ML and 3 ML coverage, the number of possible structures and the complexity of the H-bonding networks preclude such an approach to rationally construct all possible arrangements. Therefore, MD runs were used to sample the configuration space of low-energy structures. This also avoids the limitation by preconceived structural concepts inherent in manually generated structures. The MD trajectories were calculated for a total simulated time of 70 ps and 60 ps at 2 ML and 3 ML, respectively. Snapshots were taken at regular intervals (ca. 1000 fs) and the corresponding structures optimized with VASP. In the high coverage regime, a relatively small supercell (2 × 1) was chosen to generate small periodic model structures suitable for accurate electronic structure calculations. The (2 × 1) periodicity was also found at the interface with bulk water.[8,48] The MD was not meant to simulate a multilayer and its dynamic behaviour at room temperature, since a realistic interface would probably be much more disordered and require a significantly larger supercell. However, the small model structures should be sufficient to give a first insight into the energetics of water multilayers and the main structural features.
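As a rough illustration of the snapshot-and-relax workflow described above (not the authors' actual scripts; the file name, trajectory format, timestep and the use of ASE are assumptions made only for this sketch), the snapshot extraction could look as follows:

# Hypothetical sketch: extract snapshots from an MD trajectory and write
# them out as starting points for subsequent VASP relaxations.
from ase.io import read, write

traj = read("md_trajectory.traj", index=":")   # assumed file name and format
dt_fs = 1.0                                    # assumed MD timestep in fs
stride = int(1000 / dt_fs)                     # one snapshot every ~1000 fs

for i, atoms in enumerate(traj[::stride]):
    # Each snapshot becomes an independent candidate structure that is then
    # fully relaxed with VASP (bottom half of the slab kept frozen there).
    write(f"snapshot_{i:03d}.vasp", atoms, format="vasp")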
The DFT calculations employed the generalized gradient approximation (GGA) exchange-correlation functional of Perdew, Burke, and Ernzerhof (PBE)86 because of its established good accuracy in predicting equilibrium structures and binding energies of adsorbates on ZnO surfaces,44,77 as well as for hydrogen-bonded systems.87 GGA functionals underestimate the band gap of ZnO, which is 0.73 eV for PBE, compared to an experimental band gap of 3.37 eV.5 The impact of this deficiency on adsorption energies and structures was tested using the hybrid DFT functional HSE06,88 which improves the description of the electronic structure by inclusion of exact exchange into Kohn-Sham DFT and calculates a band gap of 2.48 eV for ZnO. Test calculations showed that the adsorption energies slightly increase in a systematic way by 8-13% (see ESI,† Table S1). However, the general trend in the relative stability of undissociated, partially dissociated and dissociated structures remains unchanged. Thus, more advanced methods are required for an accurate description of the band structure; however, PBE is sufficient for adsorption structures and energies. This observation agrees with recent results for water adsorption on CeO2 and H2S adsorption on ZnO calculated using GGA+U or hybrid DFT.89,90 PBE calculates structures and adsorption energies for water on ZnO in good agreement with experiment44,77 and these properties are not significantly affected by the underestimated band gap. In particular, the trends in relative stability are preserved. In view of the large computational effort necessary for hybrid DFT calculations, the present study involving many large structures was performed with the computationally more efficient PBE functional. The Vienna ab initio Simulation Package (VASP)91 was used with the PAW92,93 method to treat the electron-nuclei interactions. The expansion of the electronic wave functions was truncated at a kinetic energy cutoff of 550 eV. For integration inside the Brillouin zone, the tetrahedron approach with Blöchl corrections was used with Monkhorst-Pack sampling based on a 1 × 6 × 4 mesh for the (1 × 1) unit cell and correspondingly smaller grids for supercells according to the band folding. This ensures that adsorption energies calculated for different unit cells are directly comparable. For density of states (DOS) analysis, the k-point mesh was refined to 1 × 16 × 10 and the back side of the slab was passivated with pseudo-hydrogens to achieve a flat electrostatic potential in the bulk region. The energy scales were aligned according to this bulk electrostatic potential with the bulk valence band maximum at 0 eV.
The ZnO surface was modelled using slabs of 8 layers (16 atoms in the primitive unit cell) separated by 17.2 Å of vacuum. The bottom half of the slab was kept frozen in the bulk configuration, while the top half was fully relaxed together with the adsorbates. The quasi-Newton minimization algorithm (after initial conjugate gradient) was employed for structure optimization with a convergence criterion of 0.2 × 10⁻³ eV Å⁻¹ for the Hellmann-Feynman forces. The asymmetry due to freezing the bottom half of the slab and water adsorption on only one side leads to a dipole moment, which was compensated by a dipole correction to annihilate the electric field gradient in the vacuum. The estimated deviation of binding energies from a fully converged result is ≤0.01 eV for this slab and computational setup. For all important adsorbate structures, phonons were calculated to confirm them as true minima and not artefacts of an imposed translational symmetry.
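For readers who want to set up a comparable calculation, the following is a minimal sketch (not the authors' input files) of how the stated parameters map onto a VASP run driven through the ASE Python interface; the use of ASE, the helper name setup_relaxation and the freezing logic are illustrative assumptions, while the numerical values are those quoted above.

    # Sketch only: stated settings (PBE, 550 eV cutoff, 1 x 6 x 4 k-mesh, tetrahedron
    # smearing with Bloechl corrections, quasi-Newton relaxation, dipole correction)
    # expressed through ASE's Vasp calculator. 'slab' is assumed to be an 8-layer
    # ZnO(10-10) slab with ~17 A of vacuum, built elsewhere.
    from ase.calculators.vasp import Vasp
    from ase.constraints import FixAtoms

    def setup_relaxation(slab):
        # Freeze the bottom half of the slab in the bulk configuration.
        z_median = sorted(atom.position[2] for atom in slab)[len(slab) // 2]
        frozen = [atom.index for atom in slab if atom.position[2] < z_median]
        slab.set_constraint(FixAtoms(indices=frozen))
        slab.calc = Vasp(
            xc='PBE',               # PBE exchange-correlation functional
            encut=550,              # plane-wave cutoff in eV
            kpts=(1, 6, 4),         # Monkhorst-Pack mesh for the (1 x 1) cell
            ismear=-5,              # tetrahedron method with Bloechl corrections
            ibrion=1,               # quasi-Newton ionic relaxation
            ediffg=-2.0e-4,         # force convergence: 0.2e-3 eV per Angstrom
            ldipol=True, idipol=3,  # dipole correction along the surface normal
        )
        return slab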
The small displacement method as implemented in the PHON code94 was utilized to calculate the OH stretching frequencies. In each case, at least a (2 × 2) supercell was used with a 1 × 10 × 10 q-point mesh. Atoms in the relaxed part of the slab were displaced by 0.01 Å and the acoustic sum rule was applied to ensure the translational invariance of the supercell. The root mean square deviations (RMSD) of the frequencies with respect to the displacement amplitude, the k-point grid and the cut-off value for a (2 × 2) supercell are <1 cm⁻¹ and the RMSD with respect to the supercell size is <18 cm⁻¹.
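As a complement to the PHON workflow described above, the following toy sketch illustrates how a single harmonic O-H stretching wavenumber follows from a finite-difference second derivative of the energy; energy_at is a hypothetical callable returning a DFT single-point energy at a given O-H distance, and the simple reduced-mass treatment ignores coupling to the surface, so this is only an order-of-magnitude illustration.

    # Minimal 1D finite-difference estimate of an O-H stretch wavenumber.
    import math

    AMU_TO_KG = 1.66053906660e-27
    EV_TO_J = 1.602176634e-19
    C_CM = 2.99792458e10  # speed of light in cm/s

    def oh_stretch_wavenumber(energy_at, r0, dr=1.0e-12):
        """energy_at(r): energy in eV at O-H distance r (in metres); dr = 0.01 Angstrom."""
        # force constant from a central finite difference of the energy
        k = (energy_at(r0 + dr) - 2.0 * energy_at(r0) + energy_at(r0 - dr)) \
            * EV_TO_J / dr**2                        # N/m
        mu = 16.0 * 1.0 / (16.0 + 1.0) * AMU_TO_KG   # O-H reduced mass
        omega = math.sqrt(k / mu)                    # angular frequency, rad/s
        return omega / (2.0 * math.pi * C_CM)        # wavenumber in cm^-1

With a typical O-H force constant of roughly 500 N/m this returns about 3000 cm^-1, i.e. the expected magnitude of the stretching modes discussed later in the paper.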
The adsorption energy per water molecule was calculated using eqn (1).
where E(nH2O/ZnO), E(ZnO) and E(H2O) are the total energies of the relaxed slab with n adsorbed water molecules, the relaxed clean slab and a water molecule computed in the gas phase (optimized in a 19 Å × 19 Å × 19 Å unit cell), respectively. An insight into the different contributions to the adsorption energy may be gained from the following decomposition based on a Born-Haber thermodynamic cycle. The desorption process may be divided into four steps: (1) Separation of the adsorbate layer from the surface without any geometrical changes. This step estimates the water-surface interaction energy without other contributions (eqn (2)).
(2) Separation of the layer of n water molecules into isolated molecules, still frozen in the adsorbed geometry.This defines the water-water interaction energy (eqn (3)).
(3) Relaxation of the isolated water molecule(s) from the adsorbed structure to that of a molecule isolated in vacuum.This quantifies the energies required for the geometry changes of the water molecule(s) (eqn ( 4)); the average water relaxation energy enters the binding energy.
(4) Relaxation of the surface from the structure optimized with adsorbates to the clean surface, quantifying the energy required to modify the structure of the ZnO substrate (eqn ( 5)).
The starred total energies E*(ZnO) and E*(W layer) are the total energies of the separated substrate and adsorbate layer calculated in the supercell with their atoms frozen in their adsorbed configurations. E*(H2O,i) is the total energy of a water molecule i frozen in the adsorbed geometry but calculated in the big unit cell.
As the decomposition is based on a thermodynamic cycle, the sum of the four terms corresponds to the adsorption energy E ads as shown in eqn (6). This decomposition scheme has been previously applied for molecular adsorption of water on the ZnO(101̄0) surface.44 In this study, the scheme is extended to full and partial dissociative adsorption. For water molecules with O-H distances >1.6 Å spin polarization was taken into account. With this provision, PBE calculates a bond dissociation energy of 5.51 eV in good agreement with high-level calculations and the experimental value (5.29 eV and 5.46 eV, respectively) and reproduces the bond dissociation curve well over the whole range (see ESI,† Fig. S1).95 The surface relaxation has been discussed in the context of water dissociation on GaN(101̄0).45 In the case of dissociatively adsorbed water molecules, the water-surface interaction energy and the water relaxation energy should be analysed with caution, because the underlying assumption that the adsorbed state can be separated into two parts without substantially changing their properties may break down in some cases. The electronic structure of an OH group and a hydrogen atom bound to the ZnO surface may be quite different from the "dissociated" water molecule held in the same frozen geometry in vacuum. The latter is probably best described by a very long covalent bond between OH and H, while the dissociated hydrogen atom on the surface is bound to a surface oxygen and the OH-group to a zinc atom with little electron density in the region between H and OH. The severely stretched covalent O-H bond results in high DFT energies for the frozen water molecules, leading to a very negative water-surface interaction energy E interaction (ZnO/W) and a very high relaxation energy E relaxation (W). Problematic cases will be pointed out in the discussion and the differences in electron density will be analysed.
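Because the numbered equations did not survive in this version of the text, the following LaTeX block gives a reconstruction of eqn (1)-(6) from the verbal definitions above; the symbols E(nH2O/ZnO), E*(ZnO), E*(W layer) and E*(H2O,i) are chosen for illustration where the text does not fix the notation, and the signs follow the adsorption direction used in the tables (negative values mean attraction).

    E_{\mathrm{ads}} = \tfrac{1}{n}\left[E(n\mathrm{H_2O/ZnO}) - E(\mathrm{ZnO}) - n\,E(\mathrm{H_2O})\right] \qquad (1)
    E_{\mathrm{interaction}}(\mathrm{ZnO/W}) = \tfrac{1}{n}\left[E(n\mathrm{H_2O/ZnO}) - E^{*}(\mathrm{ZnO}) - E^{*}(\mathrm{W\,layer})\right] \qquad (2)
    E_{\mathrm{interaction}}(\mathrm{W/W}) = \tfrac{1}{n}\left[E^{*}(\mathrm{W\,layer}) - \textstyle\sum_{i=1}^{n} E^{*}(\mathrm{H_2O},i)\right] \qquad (3)
    E_{\mathrm{relaxation}}(\mathrm{W}) = \tfrac{1}{n}\textstyle\sum_{i=1}^{n}\left[E^{*}(\mathrm{H_2O},i) - E(\mathrm{H_2O})\right] \qquad (4)
    E_{\mathrm{relaxation}}(\mathrm{ZnO}) = \tfrac{1}{n}\left[E^{*}(\mathrm{ZnO}) - E(\mathrm{ZnO})\right] \qquad (5)
    E_{\mathrm{ads}} = E_{\mathrm{interaction}}(\mathrm{ZnO/W}) + E_{\mathrm{interaction}}(\mathrm{W/W}) + E_{\mathrm{relaxation}}(\mathrm{W}) + E_{\mathrm{relaxation}}(\mathrm{ZnO}) \qquad (6)

Written this way, the four terms of eqn (2)-(5) telescope exactly to eqn (1), which is the closure property expressed by eqn (6).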
Isolated single-molecule adsorption
In order to investigate the driving forces for water aggregation on the surface, the adsorption of an isolated single molecule may be taken as reference, as water-water interactions may be neglected in this situation and binding is governed only by interactions with the surface.More than 25 starting configurations have been constructed.These included undissociated, dissociated and doubly dissociated water molecules, adsorbed on top surface zinc-sites, bridging two zinc-sites and/or H-bonding to surface oxygen sites.Full optimization resulted in two molecular (Fig. 1a and b) and three dissociative (Fig. 2a-c) adsorption configurations with exothermic (negative) adsorption energies.Structures with doubly dissociated water molecules are higher in energy than a gas-phase water molecule and the relaxed clean surface (see ESI, † Fig. S2) and are not further discussed.
For molecular adsorption, the best adsorption configuration is I-M1 (I = isolated, M = molecular, −0.98 eV, Fig. 1b) in which the water molecule binds via a strong covalent bond (Zn-O W = 2.076 Å, Table 1) between its oxygen and a surface Zn atom. The bond length is comparable to bulk zinc oxide (2.012 Å) and restores a 4-fold, nearly tetrahedral coordination at the Zn atom. Furthermore, an H-bond (H W ···O S = 1.517 Å) to a surface oxygen located on the nearest ZnO dimer across the trench is formed, coordinating the doubly occupied dangling bond orbital. This adsorption energy of the water molecule is very similar to the relaxed surface energy (0.93 eV per unit cell), which corresponds to the formation of one pair of dangling bonds. Thus the strength of the bonds formed by water adsorption is comparable to the bonds in bulk ZnO, illustrating the high degree of passivation of the surface dangling bonds upon water adsorption. In the second molecular adsorption structure, I-M2, the water molecule is flipped with respect to I-M1 and forms an H-bond with the surface oxygen of the same ZnO dimer. The Zn-O W bond and H W ···O S H-bond are longer (2.180 Å and 1.814 Å) than in I-M1 and hence expected to be weaker according to the empirical bond-length bond-strength relationships.96 Moreover, the O W -H W ···O S angle φ of the H-bond deviates more from linearity in I-M2 (132.3°) compared to the angle in I-M1 (161.8°). This less favourable geometry corresponds to a weaker adsorption energy (−0.60 eV) and a weaker water-surface interaction energy E interaction (ZnO/W) = −0.73 eV in I-M2 (−1.39 eV in I-M1). The water-water interaction energies are negligible for both structures, confirming that a (3 × 2) supercell measuring 9.9 Å × 10.6 Å is sufficient to avoid interactions of periodic images. The O-H bond involved in hydrogen bonding stretches considerably in I-M1 (O W -H W = 1.049 Å vs. 0.972 Å in gas phase) due to the stronger H-bond compared to I-M2 (1.006 Å), in agreement with the higher water relaxation energy E relaxation (W) = 0.14 eV versus 0.04 eV. Also the relaxation energy of the ZnO substrate is larger in I-M1 (0.27 eV), showing that the surface and the water molecule undergo considerable changes in their geometries to optimize their interaction, in contrast to the weaker effects in I-M2 (0.10 eV).
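As a quick plausibility check of the decomposition, the four terms quoted for I-M1 can be summed and compared with its adsorption energy; a minimal sketch, using only the values given above (in eV per molecule):

    # Closure of the Born-Haber cycle, eqn (6), for I-M1 (values from the text).
    def cycle_sum(e_int_surface, e_int_water, e_relax_water, e_relax_zno):
        return e_int_surface + e_int_water + e_relax_water + e_relax_zno

    total = cycle_sum(-1.39, 0.00, 0.14, 0.27)   # water-water term is negligible
    assert abs(total - (-0.98)) < 0.01           # matches E_ads = -0.98 eV for I-M1

The same bookkeeping applied to I-M2 (−0.73, ~0.00, 0.04 and 0.10 eV) gives −0.59 eV, within rounding of the quoted −0.60 eV.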
In the case of dissociative adsorption, the most stable configuration is I-D1 (D = dissociated, E ads = −0.89 eV, Fig. 2a). The OH-group sits in a bridging position between two neighbouring ZnO surface dimers and makes two covalent bonds with the surface Zn atoms (Zn-O OH = 2.012 Å and 1.999 Å). One H atom is transferred to a surface oxygen of one of the bridged dimers. When dissociation occurs across the trench, as in the second configuration I-D2 (Fig. 2b), these bonds lengthen (Zn-O OH = 2.072 Å and 2.042 Å) and the binding energy is reduced to −0.60 eV. In the third configuration I-D3 (Fig. 2c, E ads = −0.19 eV), the OH group binds via only one strong covalent bond to a surface Zn (Zn-O OH = 1.873 Å) and a strong H-bond (H W ···O S = 1.517 Å and φ = 141.5°) with the nearest surface oxygen across the trench. The distances between the dissociated hydrogen atom and the OH-group are quite large with 2.861 Å in I-D1, 3.125 Å in I-D2 and 3.146 Å in I-D3. This results in very high water relaxation energies: E relaxation (W) = 5.48 eV, 5.38 eV and 5.35 eV, respectively. As discussed in the Methods section, the high energies of the frozen water molecules due to the stretched bonds also contribute to the water-surface interaction energies calculated via eqn (2), with E interaction (ZnO/W) ranging from −7.13 to −7.94 eV (Table 2). Formally, this consequence of calculating E interaction (ZnO/W) from frozen geometries may be accounted for by combining E interaction (ZnO/W) with E relaxation (W), which approximately corresponds to adsorbing and dissociating a relaxed gas-phase water molecule on the frozen ZnO surface (neglecting water-water interactions). The resulting value of −2.47 eV for I-D1 emphasizes the high strength of this interaction. On the other hand, the surface relaxation energy is much higher, E relaxation (ZnO) = 1.58 eV for I-D1 compared to 0.27 eV for I-M1. Thus, the gain in water-surface interaction energy due to dissociation is less than the increase in relaxation energies required to form the dissociated structure and therefore the dissociation of an isolated water molecule is not favourable on ZnO(101̄0).
Our results for structures I-M1, I-M2 and I-D1 agree with those reported previously;43,44,79,82 however, configurations I-D2 and I-D3 are new. Meyer et al.44 have studied the adsorption of isolated water molecules on ZnO in great detail, considering also many high-symmetry configurations. They reported 9 different configurations including 2 dissociated and 7 molecular adsorption structures. Hellström et al.43 found only four minima and noted that the higher-energy adsorption configurations of Meyer et al.44 converged to one of their four configurations after optimization.
Aggregation
While the adsorption of isolated water molecules on ZnO(10% 10) has been reported in several works, 43,44,79,82 formation of water clusters and further aggregation has not been studied, although this is an important process observed on many surfaces, including metals and oxides. 11,15,16,21In the following, the direct and substrate-mediated interactions of water molecules driving aggregation will be analysed as well as the driving forces for dissociation.
3.2.1 Dimers.The first step of water aggregation is the formation of a dimer.When two water molecules adsorb on neighbouring Zn-sites, they form an additional H-bond (1.429 Å) between them (Fig. 1c), while both molecules retain their Zn-O W bonds and H-bonds to the oxygens across the trench.Due to the additional water-water interaction, the adsorption energy is enhanced by À0.03 eV and increases to À1.01 eV per molecule.The binding energy decomposition indicates an even stronger direct water-water interaction energy of À0.09 eV.The difference is due to indirect repulsive interactions mediated by the interaction with the surface and the relaxation energies: the interaction with the surface is slightly weaker (À1.35 eV versus À1.39 eV) in line with longer Zn-O W and H W -O S distances (2.086 Å and 1.622 Å for the donor and 2.145 Å and 1.996 Å for the acceptor molecule in the dimer compared to 2.076 Å and 1.517 Å in the monomer).Furthermore, in order to make a short intermolecular H-bond, the two molecules move closer by shifting from the optimal position and bending the angles of the Zn-O W bond with the substrate.Finally, the water relaxation energy is larger (0.16 eV versus 0.14 eV) in the dimer as compared to the isolated molecules.
The water molecule accepting the H-bond is further polarized elongating the O-H bond that forms a H-bond to the surface to 1.084 Å (1.049 Å in the isolated molecule) and now easily transfers one proton to the surface oxygen.The resulting structure of a half-dissociated dimer is shown in Fig. 3a.Its binding energy is À1.03 eV.Thus for dimers, dissociation of the acceptor molecule is exothermic by 0.02 eV, in contrast to the strongly endothermic dissociation of isolated molecules.According to the energy decomposition (Table 3), the waterwater interaction energy is À0.10 eV and comparable to the molecular dimer.Therefore, the driving force for dissociation of the dimer is not due to direct water-water interactions.Actually the intermolecular H-bond between the intact water molecule and the OH group is 1.677 Å, longer than in the molecular dimer.On the other hand, the bonds to the surface Zn atoms are shorter with 2.059 Å for the water molecule and 1.997 Å for the OH-group.The interaction energy with the surface, based on the decomposition scheme with frozen structures, is À3.15 eV per molecule, more than twice as large as for the molecular dimer.However, this is affected by the high relaxation energy of the dissociated molecule (+3.30eV).Combining the surface interaction with the average water relaxation energy gives E interaction (ZnO/W) + E relaxation (W) = À1.47 eV for the half-dissociated dimer compared to À1.19 eV for the molecular dimer.According to both analyses, the surfaceinteractions strongly favour dissociation.On the other hand, the surface relaxation energies counteract dissociation with E relaxation (ZnO) = 0.53 eV for the half-dissociated dimer versus 0.26 eV for the molecular dimer.Therefore, partial dissociation of the adsorbed water dimer is favourable because the enhancement of the surface-interactions is larger than the increase of the relaxation energies.A fully dissociated dimer could not be found.All starting structures converged to a half-dissociated dimer after optimization.
3.2.2 1D-chains. The recent evidence of 1D-chains of water adsorbed on metal26,27 and oxide surfaces52-54 has motivated the investigation of the possible formation of such aggregates on the ZnO(101̄0) surface. Two different types of 1D aggregates may form due to the surface anisotropy: columns along the trenches and rows along the polar axis. Therefore, comparing water aggregation along the rows and columns may allow deeper insight into the subtle interplay of direct interactions between water molecules and indirect water-water interactions mediated by adsorption on different surface structures.
Table 1 Binding energies, their decomposition and selected geometrical parameters of molecularly adsorbed water as isolated molecule (I-M2 and I-M1), dimer (I-MM), row (R-M), column (C-M) and monolayer (1ML-M) on the ZnO(101̄0) surface.a

When water molecules arrange in a column along the trench, they may form an extended H-bonded chain as illustrated in Fig. 1d, where every molecule donates and accepts one intermolecular H-bond. This results in a similar binding energy (−1.01 eV) as for the isolated molecular dimer and corresponds to a stabilization of 0.03 eV relative to isolated molecules. However, the underlying ZnO substrate imposes a very long intermolecular H-bond of 2.429 Å in the column, compared to 1.492 Å in the dimer, where bending of bond angles allows the intermolecular distance to be optimized. Nevertheless, the energy decomposition reveals a strong water-water interaction of −0.12 eV in the column, compared to −0.09 eV per molecule in the dimer. Bearing in mind that the dimer has only one H-bond per two molecules, its bond strength is −0.18 eV, which is 50% more than that of the long H-bond of the column. The water-ZnO interaction is weaker in the column with −1.24 eV, compared to −1.35 eV per molecule in the dimer and −1.39 eV in the isolated molecules. This reduces the stabilizing effect of the direct water-water interaction. On the other hand, reduced relaxation energies of the water and zinc oxide stabilize the column relative to adsorption as dimer or isolated molecule (Table 1).
As an alternative structure, every second molecule in the column of water molecules may dissociate resulting in a column of half-dissociated dimers as shown in Fig. 3d.The binding energy is À1.01 eV as for the undissociated column.This contrasts with the isolated dimers, where half-dissociation was favourable.A completely dissociated column of water molecules (Fig. 2d) is considerably less stable with a binding energy of only À0.68 eV.The energy decomposition and structure is given in Table 2, but will not be further discussed.The arrangement of half-dissociated dimers in a column leads to a weak effective repulsion (E ads = À1.01 eV for the column, compared to À1.03 eV for the isolated dimer) in spite of an enhanced water-water interaction of E interaction (W/W) = À0.12 eV in the column, compared to À0.10 eV in the dimer.However, the H-bond donated by the water molecule to the OH-group is 1.685 Å, slightly longer in the column than in the isolated dimer (1.677 Å) and thus not enhanced.Furthermore, the H-bond donated by the OH-group to the water molecule of the neighbouring dimer is 3.409 Å, even longer than in the undissociated column.The bonds with the surface, Zn-O W = 2.079 Å, Zn-O OH = 2.008 Å, H S -O S = 1.039Å and the H-bond H W Á Á ÁO S = 1.739Å are all longer in the halfdissociated column than in the isolated dimer.This agrees with the weaker water-surface interaction E interaction (ZnO/W) = À2.99 eV in the column, compared to À3.15 eV in the dimer resulting in a significant surface-mediated repulsion.On the other hand, the reduced relaxation energies for the surface and water molecules (E relaxation (ZnO) = 0.52 eV and average E relaxation (W) = 1.58 eV in the column, versus 0.53 eV and 1.68 eV in the isolated dimer) contribute an indirect attraction.Thus the weak repulsive interaction between half-dissociated dimers aligned in a column is due to the weaker interaction with the surface, which more than compensates the attractive contributions of direct water-water interactions and relaxations.
When water molecules occupy every Zn-site along the polar axis forming a row as shown in Fig. 1e, they cannot form hydrogen bonds among each other due to the larger distance imposed by the substrate and the orientation imposed by the H-bond across the trench.Nevertheless, the binding energy is À1.02 eV, slightly stronger than in the column (À1.01 eV) and isolated molecules (À0.98 eV).This effective attraction of À0.04 eV relative to isolated molecules corresponds to a direct water-water interaction of À0.03 eV according to the energy decomposition, which may be due to dipole-dipole interactions.The Zn-O W bond of 2.068 Å and the hydrogen bond H W Á Á ÁO S = 1.441Å with the surface are shorter than in the isolated molecule (2.076 Å and 1.517 Å, respectively) and indicate an enhanced interaction with the surface in agreement with an increased E interaction (ZnO/W) = À1.51 eV, compared to À1.39 eV.This À0.12 eV indirect attraction is, however, compensated by an increase in the water relaxation (0.26 eV in the row, compared to 0.14 eV for the isolated molecule), which has the same magnitude.The relaxation energy of the surface does not change.
Table 3 Binding energies, their decomposition and selected geometrical parameters of partially dissociated water adsorbed on the ZnO(101̄0) surface as isolated water dimer (I-MD), half-dissociated column (C-MD), ladder-like row of dimers (R-MD), half-dissociated monolayer (1ML-MD), and the most stable structures with 2 ML and 3 ML of water.a

Dissociation of the above row of molecules is very favourable, resulting in a high binding energy of −1.06 eV. This contrasts with the unfavourable dissociation of isolated molecules and columns and illustrates the sensitivity of water dissociation to the detailed interactions with the surface and neighbouring water molecules, including indirect surface-mediated interactions. The molecules in the row dissociate by transferring the hydrogen, which is polarized (O W -H W = 1.084 Å) by the H-bond to the surface, onto the oxygen across the trench, forming a new bond H S -O S = 1.045 Å and a strong H-bond O OH ···H S = 1.555 Å (Fig. 2e). The OH-group is bound via a single Zn-O OH bond of 1.926 Å in contrast to the two Zn-O OH bonds of the most stable isolated dissociated water molecule (I-D1), which is in a bridging position. Furthermore, the dissociated hydrogen is adsorbed in a different position than in the isolated dissociated water molecules (Fig. 2a-c). These differences in the adsorbed structures should be borne in mind when comparing the corresponding energy decomposition results. Alternatively, one can compare to the undissociated row of water molecules. The water-water interactions have similar magnitudes with E interaction (W/W) = −0.04 eV for the dissociated row, compared to −0.03 eV for the undissociated row. Thus direct water-water interactions contribute very little to the favourable dissociation. The water-surface interaction is very large for the dissociated row: E interaction (ZnO/W) = −4.70 eV. However, this value may be biased by the high energy due to the very long O-H bond in the frozen geometry of the water molecule. The relaxation energy of the dissociated water molecule is 3.00 eV. Combining the water-surface interaction with the water relaxation gives −1.70 eV. The corresponding value for the row of undissociated molecules is −1.25 eV. Thus the water-surface interactions strongly favour dissociation, even when the water relaxation energies are included. On the other hand, the surface relaxation is much larger for the dissociated row with 0.68 eV, compared to 0.27 eV for the undissociated row, counteracting dissociation. However, in contrast to dissociation of an isolated molecule, the energy gain by the enhanced surface interaction is larger than the increase in relaxation energies and dissociation of a row of water molecules is favourable.
3.2.3Ladder-like quasi-1D aggregates of half-dissociated water dimers.In the same way as isolated water molecules can aggregate forming 1D-chains, the highly stable half-dissociated water dimer can further aggregate on the ZnO surface forming ladder-like quasi-1D structures (Fig. 3b).This new type of aggregate results in a particularly high binding energy of E ads = À1.19 eV per molecule, corresponding to a stabilization of 0.16 eV relative to isolated half-dissociated dimers and 0.21 eV relative to isolated adsorbed molecules.The strong driving force to form ladder-like aggregates is surprising, as the adsorbed dimers are clearly separated due to the large lattice constant of ZnO in the polar direction (5.307 Å).There is no H-bond connecting the dimers and the energy decomposition shows that the water-water interaction is only 0.04 eV stronger than in the isolated dimer (E interaction (W/W) = À0.14 eV vs. À0.10 eV, respectively).The H-bond within each dimer of the ladder is very similar to the one in the isolated dimer (1.679 Å vs. 1.677Å, respectively) and suggests that the increment by À0.04 eV in the water-water interaction energy mainly originates from the lateral interaction between neighbouring dimers.Thus direct water-water interactions (as, e.g., dipole-dipole interactions) result only in a minor contribution to the stabilization.Each water dimer in the ladders is adsorbed on the surface by the same four interactions as in the isolated half-dissociated dimer.On a quantitative level, the two bonds to the zinc atoms, Zn-O W = 2.049 Å and Zn-O OH = 1.960Å are slightly shorter than in the isolated dimer (2.059 Å, and 1.997 Å, respectively).Likewise, the H-bond of the water molecule to the surface oxygen, H W Á Á ÁO S = 1.691Å, and the bond of the dissociated hydrogen to the surface, H S -O S = 1.005Å, are shorter (1.728 Å and 1.031 Å in the isolated dimer, respectively).These shorter bonds indicate a stronger water-surface interaction in line with E interaction (ZnO/W) = À3.67 eV in the ladder, compared to À3.15 eV in the isolated dimer.On the other hand, the relaxation energies of the water molecules are 0.44 eV higher for the ladder (average for the two water molecules E relaxation (W) = 2.12 eV in the ladder compared to 1.68 eV in the isolated dimer).Combining the two terms to circumvent the impact of high-energy frozen water structures indicates a weak attraction of À0.08 eV (E interaction (ZnO/W) + E relaxation (W) = À1.55 eV versus À1.47 eV, respectively).Last but not least, the relaxation energy of the ZnO surface is 0.03 eV smaller in the ladder structures (E relaxation (ZnO) = 0.50 eV vs. 0.53 eV) and thus attractive.Thus the energy decomposition shows that the high stability of the ladder-like water aggregate is mostly due to a subtle interplay of indirect, surface-mediated interactions.Direct water-water interactions contribute only one quarter of the stabilization.While the enhanced water-surface interactions are strongly attractive, they are compensated by increased water relaxation.Finally, the reduced surface relaxation tips the balance in favour of the quasi-1D ladder structure.
3.2.4 Monolayer. The water monolayer is the most studied coverage on ZnO(101̄0). Experimental studies have found a (2 × 1) periodicity using LEED, STM, He-atom scattering and HREELS.77,80,81 Many theoretical studies43-45,78,79,82-85 have described the lowest energy structure shown in Fig. 3e that is composed of half-dissociated dimers. It may be understood as the next level in the hierarchy of aggregation, where ladders densely cover the surface such that each zinc atom at the surface is coordinated by an adsorbed water molecule or OH-group, leaving no free sites between the ladders. The adsorption energy is −1.18 eV, slightly less than for the ladder structure (−1.19 eV), indicating a weak effective repulsion between adjacent ladders. Thus the formation of the dense monolayer is driven by maximizing the number of strong Zn-O W bonds by saturating all zinc surface sites, rather than by attractive interactions between ladders. On the other hand, the energy decomposition reveals a slight increase in the water-water interaction energies from E interaction (W/W) = −0.14 eV in the ladder to −0.16 eV in the monolayer. This is analogous to the situation in the columns of half-dissociated dimers, where an effective repulsion between adjacent dimers was found in spite of an apparently attractive water-water interaction.
In addition to the half-dissociated (2 × 1) monolayer, fully molecular and fully dissociated monolayers with (1 × 1) periodicity have also been considered.82,83 These three different monolayer structures can easily interconvert, with a barrier of 0.02 eV for going from the molecular to the fully dissociated (1 × 1) structure and no barrier for going from the molecular (1 × 1) to the half-dissociated (2 × 1) structure.44,78,84 According to our phonon calculations, the (1 × 1) structures have imaginary frequencies and hence are not minima. When the unit cell of the molecular or fully dissociated monolayers is doubled, the structures optimize to the half-dissociated monolayer with (2 × 1) periodicity. Comparing the energy decompositions for the monolayer structures shows that the water-water interactions are very similar with E interaction (W/W) = −0.15 eV for the molecular and fully dissociated structures and −0.16 eV for the half-dissociated monolayer. This corresponds to only 10% of the preference for the half-dissociated monolayer. Dissociation is strongly favoured by the dramatic increase in water-surface interactions from E interaction (ZnO/W) = −1.39 eV in the molecular monolayer to −3.58 eV in the half-dissociated and −4.11 eV in the fully dissociated monolayer. On the other hand, the water relaxation energies also strongly increase from E relaxation (W) = 0.23 eV to 2.08 eV and 2.52 eV, respectively. Likewise, the surface relaxation disfavours dissociation with E relaxation (ZnO) = 0.24 eV, 0.48 eV and 0.67 eV for 0%, 50% and 100% dissociation, respectively. Thus the half-dissociated monolayer is preferred because it has the best balance between the increase of water-surface interactions, which strongly favour dissociation, and the relaxation energies that hinder dissociation. Up to half-dissociation the water-surface interaction dominates, while for full dissociation the relaxation energies dominate. Fig. 4 shows the adsorption energies of monolayer structures as a function of the dissociation degree. It also includes the results for (3 × 1) structures with 1/3 and 2/3 dissociated molecules (see ESI,† Fig. S4 for details). The symmetric shape suggests that 50% dissociation is indeed the optimum.
In the monolayers, all surface Zn- and O-sites are 4-fold coordinated, leading to strongly reduced buckling of the top layer (0.022 Å for molecular, 0.103 Å for dissociated and 0.039 Å for half-dissociated monolayers) in contrast to the clean surface (−0.274 Å) and lower aggregates, where higher corrugation amplitudes are found.
3.2.5 Point defects in the monolayer.In a domain with monolayer coverage, some molecules may be missing.These missing molecules may be considered as point-defects and can affect the adsorption.To investigate the impact of point-defects on the adsorption energy, a supercell with (2 Â 2) periodicity with missing molecules was used as model for one molecule missing in a large domain.This is a first approximation neglecting coverage effects and 2nd nearest neighbour interactions.Removing a water molecule from the (2 Â 1) half dissociated monolayer (Fig. 3g) costs 1.33 eV.The average binding energy per water molecule is reduced to À1.13 eV.Thus, not only the (average) binding energy of one molecule in the half-dissociated monolayer (À1.18 eV) is lost, but the binding energy of the three remaining molecules is also reduced by 3 Â 0.05 eV.On the other hand, removing a molecule from the less stable c(2 Â 2) half-dissociated monolayer (À1.16 eV) (ESI, † Fig. S5) requires more energy (1.45 eV) as the average binding energy of the remaining molecules is À1.07 eV.Alternative structures, where a dissociated water molecule has been removed, converged to either of these two structures, dissociating one of the adsorbed water molecules during the geometry optimization, since rows of dissociated molecules are very stable.
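The quoted defect formation energy follows directly from the average binding energies; a small illustrative calculation (values in eV, taken from the paragraph above):

    # Cost of removing one molecule from a (2 x 2) patch of the half-dissociated monolayer.
    def removal_cost(n_molecules, e_bind_full, e_bind_defective):
        # total binding of the full patch minus total binding of the defective patch
        return n_molecules * (-e_bind_full) - (n_molecules - 1) * (-e_bind_defective)

    print(removal_cost(4, -1.18, -1.13))   # ~1.33 eV, the quoted (2 x 1) defect energy

The analogous estimate for the c(2 × 2) monolayer (−1.16 and −1.07 eV) gives about 1.43 eV, close to the quoted 1.45 eV; the small difference reflects rounding of the averaged binding energies.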
3.2.6Honeycomb double monolayer.Most previous studies of water adsorption have focused on the low-coverage regime and on the monolayer.Recently, also the interface of zinc oxide with bulk water has been studied. 8,48,85However, the structures appearing in the intermediate regime, which may be of particular importance for ambient conditions 10,12 have not been studied for ZnO surfaces.Intermediate coverages between the monolayer and ice-like films have been observed, e.g., by TDS. 75,76n view of the little prior knowledge about the principles governing the structures of water multilayers and the large number of arrangements for multiple water molecules on the surface, we have used an MD to sample the low-energy structures and optimized snapshots resulting in a total of 21 distinct structures with 4 molecules adsorbed on a (2 Â 1) supercell corresponding to a coverage of 2 ML.The 10 lowest energy configurations are shown in Fig. 3h (2ML1) and in the ESI, † Fig. S6.The most stable structure 2ML1 has a binding energy of À0.93 eV per water molecule and shows a honeycomb like continuous 2D network of hydrogen-bonded 6-rings of water molecules in the relatively flat overlayer.In contrast to the monolayer, the half-dissociated dimer bound on the surface Zn-sites forms H-bonds to the additional water molecules rather than to the surface.The additional water molecules cannot form a Zn-O W bond, since all Zn-sites are already occupied at 1 ML, but H-bond to the surface: the water molecule oriented parallel to the surface plane is H-bound to the dissociated H, while the water molecule oriented perpendicular to the surface plane makes an H-bond to the surface oxygen.Additional H-bonds between the water molecules form the 2D honeycomb network.Flipping rows of H-bonds within this network results in structures with very similar binding energies (within B10-20 meV, see ESI, † Fig. S6).This indicates proton disorder in the honey-comb like H-bonding network of the 2D adsorbate layer.This is analogous to the phenomenon first described by Bernal and Fowler for the 3D structure of ice I h 97 and also observed in many other 3D-crystalline water phases including most ice phases and clathrates.The configurational entropy due to this proton disorder 98 should be taken with care as the separation into isolated molecules is not unique for 2 ML).In the honeycomb double monolayer structures all water molecules and OH-groups are 4-fold coordinated counting surface Zn-sites and surface OH-groups as additional donors and the lone pairs of surface oxygens as acceptor sites.The 4-fold coordination of the additional water molecules may be compared to the Bernal Fowler ice rules. 97,98In contrast to the 2D ice rules of Salmeron 13 that limit the growth of water clusters on noble metals, an infinitely large 2D layer can exist on ZnO because the surface also offers acceptor sites.However, the lack of dangling OH-bonds and lone pairs in the honeycomb double monolayer precludes the further growth of the water film by simply attaching more water molecules.Further growth, forming multilayers, requires restructuring of the 2 ML film to expose the necessary binding sites.][19] In case water molecules are arranged on ZnO(10% 10) in such a way that square 4-rings of H-bonded water are formed in the overlayers (see ESI, † Fig. 
S6), the binding energy decreases by 40 meV compared to the most stable structure (2ML1) due to increased angle strain. Breaking the connectivity in the rings of the water overlayer decreases the binding energy further (~50 meV) compared to the most stable structure, since the 4-fold coordination is lost and dangling donor and acceptor sites are formed.
In contrast to the ice-like buckled bilayer or flat mixed OH/H2O layers with (√3 × √3)R30° lattice observed on many close-packed hexagonal metal substrates,17,18,20,24,31 the water molecules that cannot bind to Zn-sites form H-bonds to surface oxygens. The ice-like buckled bilayer has a much weaker interaction with the substrate of 0.1-0.4 eV,14,15,18-20 a coverage of only 2/3 ML, and exposes dangling OH bonds that may allow continuous growth of a multilayer.17,18,29 Furthermore, the ZnO(101̄0) substrate is not hexagonal and half of the Zn-bound water molecules are dissociated.
3.2.7 Multilayers.As the honeycomb double monolayer offers no dangling H-bonds or lone-pairs for further growth of the water layer, it was interesting to study the rearrangements occurring upon addition of more water molecules.Structures with 3 ML coverage were generated in the same way as for 2 ML.The 10 lowest energy configurations of a total of 69 structures obtained by optimization of MD snapshots are characterized in Table 4 and shown in Fig. 3i (3ML1, lowest energy) and in the ESI, † Fig. S7.The binding energies per water molecule of these 10 structures are very similar (À0.77eV to À0.75 eV) although the structures show very different H-bond connectivities.This suggests an amorphous or liquid-like film and considerable configurational entropy that stabilizes this structure with increasing temperature.In all structures a half-dissociated dimer binds to neighbouring Zn surface atoms forming an interfacial contact layer, while the remaining water molecules form a more or less buckled H-bonded layer on top.The water dimers in the contact layer appear in the three binding motifs shown in Fig. 5 that differ in the in arrangement of the water molecule, OH-group and H atom and in the H-bonding network.In motif (a), the hydrogen is transferred to the oxygen neighbouring the intact water molecule, while motif (b) resembles the half-dissociated dimer in the monolayer.In motif (c) the direction of intermolecular H-bonding is reversed with respect to (b) and the OH-group donates the H-bond to the water molecule.As in the case of the monolayer, all ZnO surface sites are 4-fold coordinated and thus saturated.
In a few structures an additional water molecule takes part in the H-bonding between the contact layer dimer and the surface (see, e.g., structure 3ML9 in the ESI,† Fig. S7h). However, in general the additional water molecules are attached to H-bond donor and acceptor sites on top of the half-dissociated contact layer and the water molecules are interconnected via a 3D H-bond network. Many different H-bond topologies are observed and consist of H-bonding water molecules forming 4-, 5-, 6-, 7-, 8- and 10-membered rings. While most of the water molecules are 4-fold coordinated in agreement with the Bernal-Fowler ice rules, the topmost water molecules are 3-fold coordinated, exposing dangling H-bond donor and acceptor sites that can bind additional water layers, allowing a continuous growth of thick multilayers or bulk water on the surface. Indeed, similar interface structures composed of a contact layer with partially dissociated dimers and an H-bonded network of undissociated molecules on top were reported for thicker water layers on the ZnO(101̄0) surface. Two recent DFT-MD studies reported a dissociation degree of ca. 55 ± 5%,8,48 in good agreement with our results, while a higher dissociation around 80% was found using the more approximate ReaxFF force field.85 A detailed analysis of structures in the contact layer revealed dynamic proton transfers between binding motifs (a) and (b) as well as (b) and (c), with free energy barriers of ~100 meV and ~70 meV, respectively.48 Furthermore, a ~16% increased water density at the contact layer was noted due to excess water molecules H-bonding to surface oxygen atoms,48 similar to the situation in one of our 3 ML structures (3ML9).
As shown in Table 4, the strongest interaction energies with the surface (À1.33 eV to À1.40 eV) are observed in structures 3ML1, 3ML3 and 3ML4 in which binding motif (a) is present.When the water binds via the binding motif (b), the water/ surface interaction energy is lowered (À1.26 eV to À1.27 eV) as observed in structures 3ML2 and 3ML9.In structures 3ML5, 3ML6, 3ML7, 3ML8 and 3ML10 in which the water dimer in the contact layer binds via motif (c), the interaction energy is weaker (À1.13 eV to À1.15 eV).This may be related to the water relaxation energies and the distances between the OH-group and dissociated hydrogen, which are large for motif (a) and small for motif (c).The strength of the average water-water interaction energy does not depend on the amount of H-bond rings nor on the size n of the rings formed, but rather depends considerably on the binding motif in the contact layer.The strongest water-water interaction energies (À0.51 eV to À0.59 eV) are found in structures with binding motifs (a) and (b), while configurations with binding motif (c) in the contact layer have lower values (À0.45 eV to À0.47 eV).On the other hand, the relaxation energy of the ZnO substrate has roughly the same value for all structures with 0.16 eV per water molecule or 0.49 eV per surface unit cell, which is similar to the half-dissociated monolayer (0.48 eV).
Binding energy trends
The previous section has shown that several classes of aggregates can be stable on the ZnO(101̄0) surface. In all structures, the strong interaction of water with the surface Zn-sites enforces an epitactic arrangement of the water molecules in the contact layers. In the low coverage regime (0-1 ML) the adsorption energy of each class of aggregates shows little dependence on the coverage (Fig. 6). Formation of the monolayer may be envisioned as a stepwise process passing through a hierarchy of aggregates from isolated molecules, via dimers that partially dissociate, to ladders that have the highest binding energy, −1.19 eV per water molecule. Comparing the adsorption energies of isolated molecules with columns, rows and monolayers reveals a near additivity of the lateral interactions in the two directions. For molecular adsorption the increase in the binding energy relative to an isolated molecule is −0.03 eV for columns, −0.04 eV for rows and −0.09 eV for the monolayer (Table 1). Likewise, for half-dissociated adsorption the increments in binding energy relative to the isolated dimer are +0.02 eV for columns, −0.16 eV for the rows and −0.15 eV for the monolayer (Table 3). The direct water-water interactions based on the energy decomposition show an exact additivity of the interactions in rows and columns. The additivity may be related to structural similarities. For fully dissociated water the adsorption structure of the isolated molecule is unique with a bridging OH-group, in contrast to the single Zn-O OH bonds in the columns, rows and monolayer, and the additivity of the adsorption energy increments does not hold.
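The near-additivity claim can be made explicit with the increments just quoted; a minimal numeric sketch (values in eV from Tables 1 and 3 as cited above):

    # Compare the sum of the 1D (column + row) increments with the 2D (monolayer) increment.
    def additivity_gap(inc_column, inc_row, inc_monolayer):
        return (inc_column + inc_row) - inc_monolayer

    print(additivity_gap(-0.03, -0.04, -0.09))   # molecular adsorption:        +0.02 eV
    print(additivity_gap(+0.02, -0.16, -0.15))   # half-dissociated adsorption: +0.01 eV

Deviations of one to two hundredths of an eV per molecule are what the text refers to as near additivity of the lateral interactions.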
Increasing the coverage beyond 1 ML decreases the binding energy because all under coordinated surface zinc sites are occupied and the additional water molecules can only bind via hydrogen bonds.At 2 ML, the most stable adsorption configuration has a binding energy of À0.93 eV.All water molecules are tightly bound with no dangling O-H that could bind more water molecules.At 3 ML, the additional water molecules do not H-bond to the surface oxygens but are H-bonding to the contact layer.The water forms distinct H-bonded overlayers and many structures with very similar binding energies (À0.77 to À0.75 eV for the ten best adsorption configurations).For higher coverages, the binding energy is expected to approach the cohesive energy of ice (À0.55 eV), the limiting value for very thick layers of adsorbed water molecules at 0 K.These trends in the calculated binding energies agree at least qualitatively with the TDS data reported for the condensation of water on the ZnO(10% 10) surface. 75,76At low exposure a desorption peak at 340 K was reported that saturated and was assigned to binding on the Zn-site.The fact that this peak does not shift with coverage agrees with our finding that ladders and the half-dissociated monolayer have nearly identical adsorption energies.At higher exposures several peaks with lower desorption temperatures (220-152 K) were observed and assigned to H-bonded molecules in clusters, on oxygen sites, 2D ice and 3D ice.A water binding energy of 1.02 eV corresponding to a desorption peak at 367 K observed with He-scattering was estimated using Redhead analysis. 77This agrees reasonably well with our adsorption energy for the monolayer (À1.18 eV).
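For context, the Redhead estimate mentioned at the end of the paragraph relates a first-order desorption peak temperature to a binding energy; the sketch below uses the common form of the Redhead formula with an assumed attempt frequency and heating rate (neither is given in the text), so the agreement with the quoted 1.02 eV is only indicative.

    # Redhead estimate for first-order desorption: E ~ kB*Tp*(ln(nu*Tp/beta) - 3.46).
    import math

    KB_EV = 8.617333e-5   # Boltzmann constant, eV/K

    def redhead_energy(tp, nu=1e13, beta=2.0):
        # tp: peak temperature (K); nu: assumed prefactor (1/s); beta: assumed heating rate (K/s)
        return KB_EV * tp * (math.log(nu * tp / beta) - 3.46)

    print(round(redhead_energy(367.0), 2))   # ~1.0 eV for the 367 K peak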
The driving forces for water aggregation may be derived by comparing structures with identical dissociation degree and similar structures to avoid superposition by other effects.An overview of the energetic changes in such aggregation processes (ESI, † Fig. S8) reveals that direct water-water interactions (E interaction (W/W) ) are always in favour of forming larger aggregates.Likewise, the surface relaxation (E relaxation (ZnO) ) generally is smaller for larger aggregates and hence favours aggregation.This contrasts with the very recent report that on the rutile TiO 2 (110) surface relaxations due to adsorbed methanol or water reduce the adsorption energy on neighbouring sites and hence have a repulsive effect. 99The changes in water-surface interaction energy (E interaction (ZnO/W) ) and water relaxation (E relaxation (W) ) depend on the size and type of aggregate.In several cases weakening of the water-surface interactions is more important than the gain due to the other contributions, rendering aggregation unfavourable in this specific case.Thus, the driving force for aggregation of water molecules on the surface is due to a subtle interplay of direct water-water interactions and interactions mediated by surface-adsorption and due to geometry changes.
The driving forces for water dissociation may be analysed by comparing the changes in the energy decomposition terms in similar aggregates (ESI,† Fig. S9). As dissociation or partial dissociation is favourable only for certain aggregates, such as rows, the dimer and the monolayer, it was surprising to find that direct water-water interactions hardly contribute to water dissociation. E interaction (W/W) changes by less than ±0.02 eV. The driving force for dissociation is due to a very strong increase in water-surface interactions (E interaction (ZnO/W)), which can reach several eV. On the other hand, the relaxation energies of water and the surface strongly oppose dissociation. In most cases, the combined water and surface relaxation energies required for dissociation are larger than the gain due to enhanced surface interactions. Dissociation is favourable only if the energy gain due to increased water-surface interactions is larger than the energy required for the geometrical changes.
Electronic structure and binding mechanisms
Fig. 7a compares the density of states (DOS) distributions of bulk ZnO, the clean surface and two water covered ZnO(10% 10) surfaces considering both molecular and dissociative adsorption modes.For simplicity (1 Â 1) supercells are used to represent the two adsorption modes of water (Fig. 1f and 2f).Hybridization of the Zn 3d and O 2p bands in the valence band (which is overestimated in GGA-DFT) constitute the main signature of the bulk density of states in the range 0.0 to À5.8 eV, besides a sharp peak for the O 2s band at À16.9 eV.The Zn 3d and O 2p bands are slightly more broadened in the projected density of states (PDOS) of the top Zn and O atoms of the clean surface as compared to the bulk PDOS.Near the valence band maximum there is a strong double peak of the oxygen PDOS, which probably corresponds to the dangling bond and has a tail reaching +0.2 eV.Furthermore, the O 2s peak is shifted 0.5 eV higher due to the cleaved Zn-O bond.On the other hand, the PDOS of the surface zinc now has strong contributions near À6 eV.After absorption of molecular water, the PDOS of the surface zinc and oxygen atoms look very similar to the bulk DOS in agreement with the bulk-like tetrahedral coordination and geometry (Section 3.2.4).This shows that water adsorption efficiently passivates the dangling bonds and thus reverses the shifts characteristic for the clean surface.The O 2s band is back at 16.9 eV and the band due to hybridization of Zn 3d and O 2p has the same dispersion and very similar profile as in the bulk.The water orbitals corresponding to the O-H bonds (at À19.4 eV and À7.2 eV in gas-phase water) are shifted to À20.0 eV and À7.9 eV due to adsorption on ZnO, while the water lone pairs (at À3.4 eV and À1.3 eV in gas phase water) hybridize with Zn 3d orbitals resulting in a wide band at À5.9 to À0.1 eV (exaggerated due to the high Zn 3d states).Due to the proton transfer that leads to dissociative adsorption, the O 2s bands of water and ZnO shift in opposite directions by +2.5 eV and À1.7 eV, respectively.The surface O 2p states contribute to binding of the transferred proton with a new band at À6.7 eV in the PDOS of the surface oxygen, while the water O 2p states are now included in the broad binding state at À5.8 to À0.1 eV.The opposite directions (and comparable magnitudes) of the shifts in the PDOS of the surface and water oxygen atoms due to the proton transfer agree with the fact that both adsorption modes have comparable binding energies.the order of magnitude of the difference densities seem to be comparable, which indicates that the electron densities of the frozen water layers are not too different from those in the corresponding complete structures.This suggests that the assumptions underlying the energy decomposition are valid for the monolayer structures.
OH stretching modes
In Table 5, the OH stretching frequencies are reported for gas phase water, various structures at different coverages and bulk ice, comparing DFT with high-level theory and experimental data. The vibrational frequencies of the free water molecule are underestimated by ~100 cm⁻¹ compared to the high-level CCSD(T) calculations and experimental harmonic frequencies,100 due to the systematic errors in DFT. On the other hand, the harmonic DFT frequencies are ~50 cm⁻¹ higher than the experimental anharmonic frequencies.101 In each of the adsorbate structures, only one frequency is not shifted to below 3700 cm⁻¹. This is probably the dangling OH-bond or a weakly H-bonding OH group. The frequencies shifted to lower values indicate the presence of H-bonds among the water molecules and between the water molecules and the surface. We note the strong similarity of the calculated IR frequencies of the half-dissociated dimer, the ladder and the monolayer, which all contain the same dimer binding motif. They all have one frequency around 3750 cm⁻¹, which corresponds to the dissociated OH-group that does not form an H-bond. Furthermore, there are two frequencies at ca. 3000 and 3100 cm⁻¹ and one very strongly shifted frequency that shifts from 2523 cm⁻¹ in the isolated dimer to ca. 2930 cm⁻¹ in the ladders and monolayer. Our calculated frequencies for the monolayer are in agreement with previous calculations102 and agree reasonably well with HREELS data determined for monolayer coverage,81 with the highest frequency overestimated by ~50 cm⁻¹ as expected. The values reported for 2 ML and 3 ML are predictions that may help to experimentally identify the honeycomb 2 ML structure and thicker multilayers. The number of shifted frequencies, as well as the magnitude of the shift to lower wavenumbers, considerably increases with the coverage. Based on the correlation of frequency shift and H-bond strength described in ref. 103, this illustrates the strengthening of the H-bond network. At 2 ML and 3 ML, nearly all OH frequencies are included within the calculated range of OH frequencies in ice (2709-3789 cm⁻¹), which indicates an extended H-bonded network.
Conclusions
Our systematic investigation of water adsorption on the ZnO(101̄0) surface from low coverage to 3 ML revealed several important new structures, including ladder-like rows of half-dissociated dimers that hold the record in adsorption energy with −1.19 eV per water molecule and, at 2 ML, a novel honeycomb double monolayer. The latter structure is composed of Zn-bound half-dissociated dimers and additional water molecules H-bonded to surface oxygens and surface OH-groups. The water molecules form an H-bonded network of 6-membered rings, which is proton disordered, a 2D analogue of the disorder in the 3D structures of ice phases and clathrates. The configurational entropy due to this proton disorder will contribute to the stability of this adsorbate phase at finite temperatures. All water molecules and the OH group are 4-fold coordinated. Due to the absence of free OH-groups and lone-pairs, further growth of the water layer requires a major restructuring of the interface. This is illustrated by the 3 ML structures that are attached to the ZnO surface via a contact layer composed of three different binding motifs of Zn-bound half-dissociated dimers. The additional water molecules form an H-bonded network of 4- to 10-membered rings that is H-bonding to the contact layer and only in a few exceptions to surface oxygens. The large number of rather different structures with very similar energies suggests an amorphous or liquid phase and significant configurational entropy. The availability of dangling H-bond donor and acceptor sites allows further growth of this layer. In the present study small (2 × 1) unit cells have been used to gain a first insight into possible structures and their stability at this intermediate coverage regime, which had not been studied with DFT previously, although TDS experiments indicate such adsorbate phases. In spite of the small unit cells, the similarity of our results for the 3 ML film with the main features at the interface of thicker water layers with the ZnO(101̄0) surface described in the literature8,48 indicates that a minimal model with a (2 × 1) supercell and 6 water molecules already captures the key features of the interface of ZnO with bulk water. The water reorientation dynamics, proton hopping dynamics, diffusivity and the entropic stabilization of water films with 2 ML and 3 ML coverage will be the subject of a future larger-scale MD study.
At low coverage, a hierarchy of aggregation states was revealed.Two adsorbed molecules may form a dimer with an intermolecular H-bond.This dimerization activates the dissociation of the H-bond acceptor molecule, which is exothermic by À0.02 eV in contrast to dissociation of an isolated adsorbed molecule that is endothermic by 0.09 eV.The half-dissociated dimers may further aggregate forming ladder-like rows that have a higher adsorption energy than the monolayer, which is also composed of half-dissociated dimers.This indicates a weak lateral repulsion between ladders.Therefore, water monolayers form on the ZnO(10% 10) surface because this allows maximizing the number of strong Zn-O W bonds by saturating all surface Zn atoms and completing their 4-fold coordination, not because of attractive lateral interactions.At coverages below 1 ML, ladders separated by one or more empty rows are predicted to be slightly more stable than domains with full monolayer coverage and large empty areas.
Water aggregation on ZnO is controlled by a subtle interplay of direct water-water interactions including H-bonds and dipoledipole interactions versus surface-or adsorption-mediated interactions including enhanced (or reduced) water-surface interactions and relaxation energies required to optimize the geometry of the water molecules and ZnO surface for adsorption.For all cases studied, the direct water-water interaction energies, E interaction (W/W) , favour formation of larger aggregates and the contributions in column and row directions add up in the monolayer.The surface relaxation energies, E relaxation (ZnO) , also generally contribute towards aggregation or do not change.On the other hand, the sign and magnitude of the changes in water-surface interaction energies, E interaction (ZnO/W) , and water relaxation energies, E relaxation (W) , depend on the type of aggregates formed.For example, the sign of these two contributions may change from rows to columns for molecular adsorption.Furthermore, their magnitudes are much higher for dissociated water molecules.The final outcome, whether aggregation is favourable or not, depends on a subtle balance in the changes of all four terms.
Water dissociation on the ZnO(10% 10) surface also sensitively depends on the type of aggregate.While dissociation is unfavourable for isolated molecules, 100% dissociation is favourable for rows and 50% dissociation is preferred for dimers, ladders and the monolayer.Columns are a border line case, where molecular adsorption and half-dissociation leads to very similar energies.Furthermore, at 2 ML and 3 ML coverage every second water molecule bound to a surface Zn-site is dissociated.The preference for 50% dissociation of water molecules adsorbed on Znatoms is due to half-dissociated dimers, which appear as common motif in the corresponding structures.While the degree of water dissociation clearly depends on the type of aggregates, the binding energy decomposition reveals that direct water-water interactions (E interaction (W/W) ) change by only AE0.02 eV or less when similar aggregates with different dissociation degree are compared.The changes in water-surface interactions are about two orders of magnitude larger and always strongly favour dissociation.On the other hand, the relaxation energies of the water molecules and the surface strongly increase upon dissociation and hence counteract dissociation with a contribution of similar magnitude.Therefore, the energetics of water dissociation on ZnO is determined by a subtle balance of strongly enhanced water-surface interactions versus increased relaxation energies.Thus the different behaviour of the various aggregates results from indirect, surface-mediated interactions.
For many substrates a relation between water dissociation and water-surface interaction strength has been pointed out. On the structurally related GaN(10-10) surface higher dissociation degrees have been reported than on ZnO. 8,46,47 The increased dissociation is also reported for the GaN/ZnO alloy surface. 48 [39][40][41][42] Parallel to the increase in dissociation in the series ZnO, GaN, Si, the adsorption energies for molecular adsorption decrease from −1.07 eV to −0.74 eV and −0.36 eV, while they increase for dissociative adsorption: −1.07 eV, −2.18 eV, −2.47 eV, respectively. 41,46 This agrees with our conclusion that enhanced water-surface interactions are the driving force for water dissociation. Analogous trends in the adsorption energies for molecular versus dissociative adsorption were observed for isolated water molecules on the more ionic alkaline earth oxides MgO, CaO, SrO and BaO. 32 The stability of the dissociated state increases with the lattice constant and the flexibility of the substrate towards relaxation. The latter factor reduces the surface relaxation energy required to bind dissociated water, which opposes dissociation. On metal surfaces, the stability of mixed OH/H 2 O layers depends mainly on the OH-metal bonding and not on H-bonding. 24 E (nH 2 O/ZnO), E (ZnO) and E (H 2 O) are the total energies of the relaxed slab with n H 2 O adsorbed water molecules, the relaxed clean slab and a water molecule computed in the gas phase (optimized in a 19 Å × 19 Å × 19 Å unit cell).
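The total energies just defined are the ingredients of the binding energies discussed throughout this section. As a purely illustrative sketch (not the authors' code, and with placeholder numbers rather than values from the paper), the snippet below computes the average binding energy per adsorbed water molecule from those total energies and checks that a set of four decomposition terms, supplied as inputs, adds up to the same quantity.

```python
# Illustrative sketch only: placeholder energies, not data from the paper.

def binding_energy_per_water(E_nH2O_ZnO, E_ZnO, E_H2O_gas, n):
    """Average binding energy per adsorbed water molecule (eV),
    E_b = [E(nH2O/ZnO) - E(ZnO) - n*E(H2O)] / n."""
    return (E_nH2O_ZnO - E_ZnO - n * E_H2O_gas) / n

def decomposition_sum(E_int_ZnO_W, E_int_W_W, E_relax_ZnO, E_relax_W):
    """Sum of the four contributions discussed in the text; it should
    reproduce the binding energy per molecule."""
    return E_int_ZnO_W + E_int_W_W + E_relax_ZnO + E_relax_W

# Hypothetical total energies (eV) for a two-molecule aggregate:
E_b = binding_energy_per_water(E_nH2O_ZnO=-1234.56, E_ZnO=-1220.10,
                               E_H2O_gas=-6.12, n=2)
print(f"binding energy per water: {E_b:.2f} eV")
# Hypothetical decomposition of the same quantity:
print(f"decomposition sum: {decomposition_sum(-1.60, -0.30, -0.10, 0.89):.2f} eV")
```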
leading to a very negative water-surface interaction energy E interaction (ZnO/W) and a very high relaxation energy E relaxation (W) (E ads = −0.19 eV), the OH group binds via only one strong covalent bond to a surface Zn (Zn-O OH = 1.873 Å) and a strong H-bond (H W···O S = 1.517 Å and φ = 141.5°) with the nearest surface oxygen across the trench. The distances between the dissociated hydrogen atom and the OH-group are quite large, with 2.861 Å in I-D1, 3.125 Å in I-D2 and 3.146 Å in I-D3. This results in very high water relaxation energies: E relaxation (W) = 5.48 eV, 5.38 eV and 5.35 eV, respectively. As discussed in the Methods section, the high energies of the frozen water molecules due to the stretched bonds also contribute to the water-surface interaction energies calculated via eqn (2) with E interaction (ZnO/W)
Fig. 1
Fig. 1 Binding energies, top, front and side views of molecularly adsorbed water on the ZnO(10-10) surface: (a and b) isolated molecules I-M2 and I-M1, (c) dimer I-MM, (d) column C-M, (e) row R-M and (f) monolayer 1ML-M. Zinc atoms grey, oxygen atoms of ZnO red, oxygen atoms of water molecules blue, hydrogen atoms white.
Fig. 2
Fig. 2 Binding energies, top, front and side views of dissociated water adsorbed on the ZnO(10-10) surface: (a-c) isolated molecules I-D1, I-D2 and I-D3, (d) column C-D, (e) row R-D and (f) monolayer 1ML-D. Zinc atoms grey, oxygen atoms of ZnO red, oxygen atoms of water molecules blue, hydrogen atoms white.
Table 2
a Energies in eV, distances in Å, angles in degrees. b Bond length of the surface Zn-O dimer (2.012 Å in bulk and 1.872 Å in the clean surface). c Bond length between the zinc atom and the water oxygen. d Length of the H-bond between water and surface oxygen. e Length of the water O-H bond forming a hydrogen bond to the surface. f H-bond between water molecules. g Length of the free water O-H bond (0.972 Å in the gas phase). h Height difference between the top surface layer and the corresponding bulk layer centre of mass (−0.274 Å in the clean surface). i Angle of the hydrogen bond O W-H W···O S.
Binding energies, their decomposition and selected geometrical parameters of dissociated water adsorbed as isolated molecules (I-D3, I-D2 and I-D1), column (C-D), row (R-D) and monolayer (1ML-D) on the ZnO(10-10) surface.
a Energies in eV, distances in Å, angles in degrees. b Bond length of the surface Zn-O dimer (2.012 Å in bulk and 1.872 Å in the clean surface). c Length of the bond between zinc and the OH-group oxygen. d Distance between the dissociated hydrogen atom and the OH-group oxygen. e Bond length between the dissociated hydrogen and surface oxygen. f Bond length of the OH-group (0.972 Å in the gas phase). g Height difference between the top surface layer and the corresponding bulk layer centre of mass (−0.274 Å in the clean surface). h Angle O OH···H S-O S of the H-bond.
Fig. 3
Fig. 3 Top, front and side views of partially dissociated water aggregates on the ZnO(10-10) surface: (a) isolated half-dissociated dimer I-MD, (b) ladder-like row of half-dissociated dimers R-MD, (c) ladder-like row of trimers with MMD sequence R-MMD, (d) half-dissociated columns C-MD, (e) half-dissociated monolayer 1ML-MD, (f) monolayer with MDD sequence 1ML-MDD, (g) monolayer with point defect P-MD_D, (h and i) most stable structures with 2 ML and 3 ML of water, respectively. Zinc atoms grey, oxygen atoms of ZnO red, oxygen atoms of water molecules blue, hydrogen atoms white. For better visualization, the oxygen atoms of water and OH-groups bound to Zn are dark blue, while the oxygen atoms of additional H-bonded molecules (2 ML) or of molecules in the outermost layer (3 ML) are sky blue.
Fig. 4
Fig. 4 Binding energy of monolayer structures as a function of dissociation degree.
will contribute to the stability of this adsorbate phase at finite temperatures. As for the monolayer, all ZnO surface atoms are saturated with a nearly tetrahedral 4-fold coordination: Zn surface atoms form a bond to either an OH-group or a water molecule (1.967 Å and 2.037 Å, respectively), while the surface oxygens either bind the dissociated hydrogen atom (H S-O S = 1.027 Å) or accept an H-bond from a water molecule. The water-surface interaction energy is −3.82 eV per surface unit cell and hence stronger than in the monolayer (−3.58 eV). On the other hand, the relaxation energies of the surfaces are comparable, with −0.44 eV per surface unit cell for the honeycomb double monolayer and −0.48 eV for the monolayer. The H-bonds in the honeycomb network range from 1.661 Å to 2.261 Å and the water-water interactions are strong, −0.66 eV per water molecule (E interaction (W/W) and E relaxation (W)
Fig. 5
Fig. 5 Binding motifs in the contact layer at the ZnO(10-10)/3 ML interface. The additional water molecules have been removed for clarity.
Fig. 6
Fig. 6 Binding energy as a function of coverage for various aggregation classes.
Fig. 7b and c show difference electron densities for molecular and dissociative adsorption, respectively. They were calculated by subtracting the electron density of the slab and the adsorbate, calculated separately in the frozen geometry, from the electron density of the optimized structures. Molecular adsorption (Fig. 7b) leads to polarization of the water molecule, shifting electron density from the O-H bond forming the H-bond and from the top of the molecule toward the centre of the H-bond, the centre of the forming Zn-O bond, and the region of the remaining lone pair of the water molecule. Polarization of the surface zinc and oxygen atoms also contributes to the charge build-up in the new bonds formed. Dissociative adsorption (Fig. 7c) results in even stronger polarization of the strongly distorted water molecule, shifting electrons from the centre of the O W···H S H-bond (in the calculation of the frozen water molecule this is a long O-H bond) towards the new O S-H S and Zn-O W bonds and the region of the lone pairs of the OH-group. The surface (and to a lesser extent the sub-surface) zinc and oxygen atoms are also polarized, shifting electron density towards the bonds formed. In spite of the stronger electron redistribution in the dissociated structure,
Fig. 7
Fig. 7 (a) PDOS for bulk ZnO, the clean surface, molecular and dissociative adsorption, and a gas-phase water molecule. Valence band maxima and conduction band minima are indicated. Difference electron density for (b) molecular and (c) dissociative adsorption. Iso-surfaces are drawn at the +0.003 (yellow) and −0.003 (cyan) e Å⁻³ density levels.
Table 4
Binding motif of the water dimer in the contact layer, H-bond topology and contributions to the binding energy for the ten most stable structures with 3 ML. a Binding motif in the contact layer. b Ring size(s) of the H-bond network. c W 1 (W 2) is the dissociated (undissociated) water molecule adsorbed on a surface Zn-site. d Additional water molecules in the second layer not bound to a surface Zn-site. e Additional water molecules in the third layer not bound to a surface Zn-site. f The average of the water relaxation energies E relaxation.
Graphene-based tunability of chiral metasurface in terahertz frequency range
This paper is devoted to a bi-layer chiral metasurface with graphene half-petals. Numerical simulation in CST Microwave Studio confirms that changing the chemical potential of the graphene changes the transmission coefficients of this metasurface. Due to chirality, these coefficients are different for right-handed and left-handed circularly polarized waves. Thus, such a metasurface can be used as a tunable polarization converter.
Introduction
In recent decades the THz frequency range has become very popular in the scientific community due to its unique properties and its applications in space exploration, tomography [1], biomedicine [2][3][4], etc. Despite this, a deficit of passive components still exists for the terahertz frequency range. A solution to this problem could be found in the development of metamaterials with different functionalities. Metamaterials, or artificial effective media, consist of an array of unit cells and show effects that cannot be found in nature, for example a negative refractive index. This phenomenon was predicted theoretically by V. Veselago in 1967 [5], although it was confirmed experimentally many years later by D. Smith and others in 1999 [6]. This pioneering work initiated a large increase in research on metamaterials. Metamaterials have different applications, such as tunable reflectors, switchers, filters, and perfect absorbers [7][8][9], and they are especially used in polarization components. Chiral planar metamaterials, or chiral metasurfaces, show negative refractive index, circular dichroism, etc. These unique properties allow applying chiral metamaterials in polarization optics. In this work we propose a tunable polarization converter: a bi-layer chiral metasurface composed of conjugated gammadion resonators with graphene inclusions.
The metasurface under study
The unit cell of the investigated metasurface [10][11] is shown in Fig. 1. The geometrical parameters are as follows: the side of the unit cell is a = 600 µm, the width of the planar gammadion petal is w = 10 µm (its inner radius is Rmin = 140 µm), the silicon substrate thickness is 45 µm and its permittivity is ε = 11.34. The back-side resonator is made entirely of perfect electric conductor (PEC), while half of the front resonator has been replaced by graphene. Two variations of the metasurface were studied. In the first case the center of the top resonator was made of PEC and the edges of the gammadion petals were made of graphene (as shown in Fig. 1a). In the other case the metallic and graphene parts were reversed.
The numerical simulation and polarizing properties calculation approach
The transmission simulations for linearly polarized waves were performed in the frequency domain using CST Microwave Studio, based on the finite element method. For each design the unit cell was translated along the x and y axis directions. A scheme of the numerical simulation can be found in Fig. 3. To calculate the polarization properties of the metasurface one needs to evaluate the transmission spectra for circularly polarized waves. These spectra can be calculated from the simulated co- and cross-polarization transmission coefficients Txx and Txy, respectively, by using the Jones calculus approach [12] for structures with fourfold (C4) rotational symmetry: (1) where T++ refers to the transmission amplitude for right-handed circularly polarized waves and T-- to that for left-handed ones. Having calculated the transmission amplitudes for circularly polarized waves, the ellipticity angle was found using the following formula:
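Equations (1) and (2) are not reproduced in the text above. For orientation only, the sketch below uses the commonly quoted Jones-calculus relations for a C4-symmetric structure, T±± = Txx ± i·Txy, and the ellipticity angle η = ½·arcsin[(|T++|² − |T−−|²)/(|T++|² + |T−−|²)]; the exact sign convention used in the paper is an assumption here, and the Txx, Txy values below are placeholders rather than simulated data.

```python
import numpy as np

def circular_transmission(Txx: complex, Txy: complex):
    """Co-polarized circular transmission amplitudes for a C4-symmetric
    structure (assumed convention: T++ = Txx + i*Txy, T-- = Txx - i*Txy)."""
    return Txx + 1j * Txy, Txx - 1j * Txy

def ellipticity_angle_deg(t_pp: complex, t_mm: complex) -> float:
    """Ellipticity angle (degrees) of the transmitted wave."""
    num = abs(t_pp) ** 2 - abs(t_mm) ** 2
    den = abs(t_pp) ** 2 + abs(t_mm) ** 2
    return float(np.degrees(0.5 * np.arcsin(num / den)))

# Placeholder co- and cross-polarized coefficients at one frequency point:
t_pp, t_mm = circular_transmission(Txx=0.42 + 0.10j, Txy=0.18 - 0.05j)
print(f"|T++| = {abs(t_pp):.3f}, |T--| = {abs(t_mm):.3f}, "
      f"ellipticity = {ellipticity_angle_deg(t_pp, t_mm):.1f} deg")
```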
Results
The ellipticity angle spectra were calculated for six values of the graphene chemical potential for both metasurface types in the frequency range of 0.1-0.2 THz. The results shown in Fig. 2 demonstrate the strong dependence of the polarizing properties on the chemical potential of graphene. The polarization state of the transmitted wave can be changed by varying the chemical potential of graphene from 0 to 0.5 eV. As we can see, this dependence is almost the same for both structures around the frequency of 0.145 THz, but quite different for the resonance at 0.178 THz: for the original metasurface design the maximal value of the ellipticity angle is 38 degrees, while for the second design it is 43 degrees. This effect is caused by replacing parts of the graphene by PEC on the front-side resonator. Since the material of the back-side resonator was not changed, we can suppose that the ellipticity extremum at 0.145 THz was caused by resonances of this resonator. The second extremum in this frequency range may depend on the resonance of the hybrid front resonator.
Conclusions
In summary, the influence of graphene inclusions on the polarizing properties of a chiral metasurface has been studied. It was found that the ellipticity angle depends on the Fermi level of the graphene inclusions. The noticeable changes of ellipticity enable a possible wide usage of the metasurface as a tunable polarization converter in many applications, for example terahertz polarimetry, terahertz time-domain spectroscopy, etc.
Mechanical evaluation of sustainable concrete production from porcelain waste
Concrete has witnessed great development in recent years, with the production of reliable types of concrete that withstand the applied loads, have high durability, and are easy to place; concrete is now also prepared from waste (sustainable) materials because of their availability and low cost, and because this rids the environment of these materials. This research demonstrates the use of porcelain tailings as a partial substitute for coarse aggregate, studying the effect of the partial replacement on mechanical properties such as flexural strength, tensile strength, and wet density, and then comparing them with the reference mixture. In this research, coarse aggregate was replaced with porcelain waste at 25%, 35%, and 50% by weight while the water-to-cement ratio was maintained at 40%; it is observed that concrete with high tensile and flexural strength is produced at a replacement percentage of 50%.
Introduction
Many researchers seek to produce concrete with good specifications by using materials that may be added to, or partially replace, one of the components of the mix, in order to obtain high-quality concrete at an economical level. The focus of this research is the use of porcelain residues generated when porcelain is used as a finishing material in buildings, such as for covering facades and floors, where it is chosen for its attractive appearance, durability, and ease of cleaning. Using porcelain produces waste from cutting the tiles to suit the dimensions of particular pieces and corners; to prevent a build-up of porcelain in the environment, this waste can be used as a substitute for materials in concrete.
The aim of the study
- The main objective is to use porcelain waste in concrete, find the optimal proportion of porcelain addition as an alternative to coarse aggregate, and compare it with the reference mixture.
- Evaluation of wet density and of tensile and flexural strength when adding the sustainable material.
Area of the study
- The goal of this project is to make sustainable concrete from porcelain tailings as a substitute for coarse aggregate and to study the effect of the addition on the mechanical features of the concrete.
- Porcelain waste is used at replacement rates of 25% and 50% of the coarse aggregate and compared with the reference.
Literature review
Brito et al. [1] studied the mechanical behaviour of non-structural concrete made with recycled ceramic aggregates for the production of 50 mm thick pavement slabs. The specific density of the ceramic-aggregate concrete as well as mechanical properties such as compressive strength, flexural strength, and abrasion resistance were evaluated in several fresh and hardened concrete tests. The waste ceramic utilized in the study was acquired from a Portuguese factory, where it was crushed and used in place of the customary coarse aggregate to form 50 mm thick pavement slabs. The study looked at four distinct concrete compositions with various proportions of replacement of the common aggregate, namely 0, 0.33, 0.66, and 1 by mass. Denesh and Gunaseelan [2] employed porcelain tile at 25%, 50%, and 75% instead of coarse aggregate and compared the qualities of the M30-grade porcelain-tile concrete to those of regular concrete in both the fresh and hardened states. Compressive strength could be raised by as much as 29.09% by replacing 50% of the coarse aggregate with porcelain waste. Waste porcelain tile was found to increase the concrete's modulus of rupture by up to 23.5 percent when used to replace coarse particles.
Ndambuki et al. [3] used granite with a maximum particle size of 12.5 mm as a partial substitute for both fine and coarse aggregates, at percentages of 25%, 50%, 75%, and 100%, and studied its impact on the properties of fresh and hardened concrete. The maximum flexural strength and split tensile strength were obtained by individually substituting coarse aggregate and fine aggregate for 100% of the natural aggregate. As the replacement fraction of the natural aggregates grew, the mechanical characteristics of the CWA concrete improved. Sathya et al. [4] focused on an experimental investigation of concrete strength and the best replacement percentage for cement in M25-grade concrete using ceramic waste, which is produced at the end of polishing and finishing of ceramic tiles, at 0%, 10%, 20%, and 30% replacement rates. Several concrete mixtures were created, and tests for compressive, tensile, and flexural strength were conducted. The outcomes were contrasted with typical concrete. As a result, the strength increased by up to 20% when the ceramic powder was used in place of cement in concrete. Another experimental examination of cement mortar strength and the ideal replacement percentage using ceramic waste in the ratios of 0%, 10%, and 20% is also covered in that paper. Mortar combinations were created, tested, and compared to the standard.
Mohammed [5] tested how the properties of concrete are affected by utilizing crushed porcelain as a partial substitute for fine aggregate. Porcelain crushed filler was utilized in this study, replacing the fine aggregate by weight at percentages of 0, 10, 20, 30, and 40. The results demonstrate that the use of porcelain crushed filler has had an impact on the properties of the concrete, causing a reduction in concrete density of up to 6.07% at a replacement percentage of 40% and reducing its water absorption; at the same percentage, absorption decreased to 17%. The findings also show that the porcelain crushed filler has a positive effect on compressive and tensile strengths, with an increase of up to 18% at a replacement percentage of 20% when compared to normal concrete.
Perera and Kobbekaduwa [6] tested Porcelain Waste Fine Aggregate (PWFA), a material with low water absorption, used to replace conventional fine aggregates in Grade 30 concrete in the proportions of 25%, 50%, 75%, 85%, and 100%. The 75% mix was found to be the most suitable and cost-effective replacement proportion of PWFA, with a 28-day compressive strength of 54.31 MPa, which is 50% greater than the compressive strength of the control mixture. Because of its higher strength, the 75% PWFA Grade 30 mix can be used as Grade 45 concrete, saving up to 10% of the cost.
Materials
Materials used in concrete
Coarse aggregate
In this research, crushed gravel from the Al-Nabai quarry was used, with a maximum size of 20 mm. The coarse aggregate conforms to specification IQS 45 [7]. The aggregate used is shown in Figure 1.
Porcelain tile
Waste porcelain is widely available, particularly in the countries of the Middle East. Some Arab nations, like Iraq, rely entirely on the importation of porcelain in huge quantities to meet consumer demand [8]. Porcelain waste resulting from building finishes used in flooring or wall cladding was used, with a maximum size of 19 mm, as illustrated in Fig. 2; the porcelain tile conforms to specification IQS 45 [7]. Table 1 shows the chemical analysis of the porcelain used.
Fine aggregate
Sand is the term used in the construction sector to describe a fine-grained material with particles smaller than 5 mm. The whole building sector uses sand; it is a crucial raw material for providing infrastructure and houses all around the world. Standards demand that sand possess qualities such as a limited particle distribution, inertness, density, hardness, a water absorption limit, metal type, endurance, and the absence of toxic chemicals. Laboratory-treated fine aggregate from Najaf quarries was used, with a maximum size of 4.75 mm, conforming to IQS 45 Zone 2 [7]. The sand used in the mixtures is shown in Fig. 3, and Table 2 gives the chemical and physical properties of the sand used in the mixes.
Tests of concrete
Wet density
The density of fresh concrete is found by dividing the weight of concrete compacted in a container of known dimensions by the volume of the container, according to specification BS EN 12350-6 [13].
Tensile strength (MPa)
An indirect (splitting) tensile strength test for concrete was conducted on an average of three specimens for each proportion, using cylinders of (100 × 200) mm at 28 days of age, according to specification BS EN 12390-6 [14]. The test is shown in Figure 4. As Table 6 shows, the decrease in the density of the concrete upon replacement results in lighter-weight concrete, which indicates that porcelain is lighter than the gravel in the traditional mixture. Porcelain aggregate has a lower density, which is advantageous in this country, where the soil bearing capacity is low at most construction sites (Saleh [16]).
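The paper does not reproduce the evaluation formula; for reference, BS EN 12390-6 computes the splitting tensile strength of a cylinder as f_ct = 2F/(π·L·d), with F the maximum load, L the specimen length and d its diameter. The sketch below simply applies that standard relation for the (100 × 200) mm cylinders mentioned above; the load value is a made-up placeholder, not a result of this study.

```python
import math

def splitting_tensile_strength(load_kN: float, length_mm: float,
                               diameter_mm: float) -> float:
    """Splitting tensile strength f_ct = 2F / (pi * L * d), in MPa."""
    load_N = load_kN * 1e3
    return 2.0 * load_N / (math.pi * length_mm * diameter_mm)

# Hypothetical failure load for a 100 mm x 200 mm cylinder:
f_ct = splitting_tensile_strength(load_kN=110.0, length_mm=200.0, diameter_mm=100.0)
print(f"f_ct = {f_ct:.2f} MPa")  # about 3.5 MPa for this placeholder load
```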
Tensile strength
The addition of porcelain waste as an alternative to coarse aggregate led to an increase in tensile strength (Denesh [2]) of 8% and 20% when adding 25% and 35%, respectively, and of 26% when adding 50%, as shown in Table 7. Figure 6 demonstrates the flexural strength results for the concrete mixes at 28 days of age; the bending test results show an increase of 45% when the replacement rate was 50%. The reason for the improvement in flexural strength is the bonding strength provided by the porcelain when it is broken, as it produces, when broken, elongated plates with edges that ensure interlock between the components of the mixture (Qasim et al. [17]).
Conclusions
- After calculating the fresh density of the concrete, we notice lower-weight concrete with a continuous increase in the percentage of porcelain, which results in lighter concrete compared to the reference mixture.
- Increase in tensile and bending resistance values: the highest values were recorded at a 50% replacement ratio compared to the reference mixture.
- Through the results obtained, it is possible to produce lightweight concrete using sustainable materials, reducing the accumulation of these materials in the environment, and to obtain concrete with flexural and tensile strength higher than the traditional one at 50% replacement.
Figure 4 .
Figure 4. Concrete sample under the tensile strength test.
Figure 5 .
Figure 5. Concrete sample under the flexural strength test.
Table 1 .
Chemical analysis for porcelain
Table 2 .
Chemical and physical analysis of sand
Iraqi sulphate-resistant cement is used in the mixes. Table 3 and Table 4 show the chemical and physical properties of the cement, respectively, and it conforms to IQS 5, CEM I 42.5 SR [11].
Table 3 .
chemical analysis of cement
Table 4 .
Physical properties of cement
Water
Because it actively takes part in the chemical reaction of the cement, water is a crucial component of concrete [12]. Since it gives cement concrete its strength, a very careful examination of the quantity and quality of the water is necessary. Acids, alkalis, sugars, salts, oils, and hazardous organic materials are not present in tap water. Its pH value of 7.01 satisfies the specifications of IS 456-2000. Additionally, it is utilized for curing the specimens and mixing the concrete.
Table 6 .
results of wet density for concrete mixes
Table 7 .
tensile strength of concrete mixes at 28 days of age
A population-based study of asthma, quality of life, and occupation among elderly Hispanic and non-Hispanic whites: a cross-sectional investigation
Background The U.S. population is aging, and the elderly population is expected to double by the year 2030. The current study evaluated the prevalence of asthma and its correlates in the elderly Hispanic and non-Hispanic white population. Methods Data from a sample of 3021 Hispanic and non-Hispanic white subjects, 65 years and older, interviewed as part of an ongoing cross-sectional study of the elderly in west Texas, were analyzed. The outcome variable was categorized into: no asthma (reference category), current asthma, and probable asthma. Polytomous logistic regression analysis was used to assess the relationship between the outcome variable and various socio-demographic measures, self-rated health, asthma symptoms, quality of life measures (SF-12), and various occupations. Results The estimated prevalences of current asthma and probable asthma were 6.3% (95%CI: 5.3–7.2) and 9.0% (95%CI: 7.8–10.1), respectively. The majority of subjects with current asthma (mean SF-12 score 35.8, 95%CI: 34.2–37.4) or probable asthma (35.3, 34.0–36.6) had significantly worse physical health-related quality of life as compared to subjects without asthma (42.6, 42.1–43.1). In multiple logistic regression analyses, women had 1.64 times greater odds of current asthma (95%CI: 1.12–2.38) as compared to men. Hay fever was a strong predictor of both current and probable asthma. The odds of current asthma were 1.78 times (95%CI: 1.24–2.55) greater among past smokers, whereas the odds of probable asthma were 2.73 times (95%CI: 1.77–4.21) greater among current smokers as compared to non-smokers. Similarly, fair/poor self-rated health and complaints of severe pain were independently associated with current and probable asthma. The odds of current and probable asthma were almost twofold greater for obesity. When stratified by gender, the odds were significantly greater among females (p-value for interaction term = 0.038). The odds of current asthma were significantly greater for farm-related occupations (adjusted OR = 2.09, 95%CI: 1.00–4.39), whereas the odds were significantly lower among those who reported teaching as their longest held occupation (adjusted OR = 0.36, 95%CI = 0.18–0.74). Conclusion This study found that asthma is a common medical condition in the elderly and that it significantly impacts quality of life and general health status. The results support adopting an integrated approach to identifying and controlling asthma in this population.
Background
The U.S. population is aging, and the elderly population is expected to double by the year 2030, when the elderly will comprise up to 20 percent of the total population. The population aged 85 years and older will reach 21 million by the year 2050 [1]. Additionally, baby boomers will begin reaching 65 years of age in less than a decade. Therefore, epidemiologic studies of aging and age-associated diseases have national relevance.
Despite the worsening national trends for asthma over the past 25 years, bronchial asthma in the elderly has not received as much attention as asthma among children and adults. Many national and international studies exclude the elderly when studying asthma, partly because asthma is difficult to distinguish from chronic obstructive pulmonary disease and congestive heart failure in older age [2,3]. However, recent studies have indicated that asthma is not an uncommon condition among the elderly. In the U.S., the prevalence of asthma among the elderly ranges between 4% and 10% [4][5][6][7]. According to the Centers for Disease Control and Prevention (CDC), self-reported asthma rates in the elderly U.S. population increased sharply from 31 per 1000 in 1980 to 45 per 1000 in 1994 [8].
Long-term exposure to occupational agents at the workplace may result in poor quality of life later in life; however, the precise relationship between different occupations and asthma has not previously been studied in the elderly. According to state projections, by the year 2025 Texas will have the third largest population of individuals aged 65 and older, after California and Florida [9]. Since morbidity due to asthma is on the rise, understanding the factors associated with asthma and its association with the quality of life of older individuals is important. In this study the prevalence of asthma and asthma symptoms and their relationship with occupation and health-related quality of life were estimated among older individuals in a large, sparsely settled region of west Texas.
Methods
The study data were collected as part of a large ongoing telephone-based cross-sectional study of individuals 65 years and older residing in the 108 counties that comprise west Texas. A detailed description of the survey methods has been previously published [10]. Three waves of the survey have been completed. The original sample comprised 5006 subjects. The focus of wave 3 of the survey was respiratory conditions and symptoms and their effects on the older population. The cooperation rate for the wave-3 survey (completed interviews/(completed interviews + refusals)) was 90.4%; the response rate (completed interviews/(completed interviews + refusals + eligible non-contact)) was 86.7% [11,12]. The analysis for the present study was limited to the third wave of the survey, conducted from October 2001 through December 2001. Of the 3392 subjects interviewed during this third wave, 237 reported a prior history of emphysema, as determined by an affirmative response to the question, "Have you ever been diagnosed by a physician to have emphysema?", and were excluded from the analysis, leaving a sample of size 3155. Of these, 3021 were non-Hispanic whites or Hispanics and were included in the final analysis. During wave 3 of the survey, subjects were asked questions on general demographics, presence of asthma, asthma symptoms, allergies, smoking habits, housing characteristics, family history of asthma and allergies, chronic bronchitis and emphysema (collectively referred to as "COPD"), health-related quality of life (SF-12), and asthma-specific quality of life (mini Asthma QoL).
Asthma-related questionnaire items in this study were derived mainly from the International Union Against Tuberculosis and Lung Disease (IUATLD) bronchial symptom questionnaire [13] which has been previously validated in several countries. In addition, a cluster of five previously validated questions on asthma symptoms, collectively referred to as the Discriminative Function Predictor (DFP) were included in the final questionnaire.
Dependent variable
Our main outcome was a three-category asthma variable coded as no asthma (reference category), current asthma, and probable asthma. Current asthma was defined as an affirmative response to both questions, "Have you ever been diagnosed by a physician to have asthma?" and "Do you still have asthma?" Diagnosis of asthma made by a health care provider still remains the most common approach used to define asthma in epidemiological studies [14]. The approach used in defining current asthma is similar to that used regularly in the U.S. National Health Interview Survey (NHIS) [15]. It is on this basis that the NHIS establishes its national prevalence estimates for current asthma. Probable asthma was defined using the weighted 5-item asthma symptom questions, collectively referred to as the discriminant function predictor (DFP) [13]. The items included in the DFP were weighted using the following logit equation: logit P(X) = −2.92 + 1.42(W) + 1.39(SOB) + 1.00(TRB_C) + 1.51(TRB_N) + 2.37(CT_D), where W = wheezing in the past 12 months; SOB = nocturnal shortness of breath in the past 12 months; TRB_C = continuous trouble with breathing; TRB_N = breathing is never quite right; CT_D = chest tightness around dust, animals, or feathers. To construct the variable "probable asthma" we used the logit coefficients to generate logit scores. The default cut-off value of p > 0.5 was used to classify subjects as having probable asthma. Based on these criteria, a total of 207 subjects were classified as having current asthma and a total of 265 subjects were classified as having probable asthma; these two groups did not overlap. A total of 2,549 subjects were classified as having neither current nor probable asthma.
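To make the classification rule concrete, the sketch below evaluates the DFP score from the five binary symptom indicators using the coefficients quoted above and applies the p > 0.5 cut-off. This is an illustrative re-implementation, not the study's analysis code, and the example symptom pattern is invented.

```python
import math

# DFP coefficients quoted in the text.
INTERCEPT = -2.92
WEIGHTS = {"W": 1.42, "SOB": 1.39, "TRB_C": 1.00, "TRB_N": 1.51, "CT_D": 2.37}

def probable_asthma(symptoms: dict, cutoff: float = 0.5) -> bool:
    """Return True if the DFP probability exceeds the cut-off.
    `symptoms` maps each indicator (W, SOB, TRB_C, TRB_N, CT_D) to 0 or 1."""
    logit = INTERCEPT + sum(w * symptoms.get(k, 0) for k, w in WEIGHTS.items())
    p = 1.0 / (1.0 + math.exp(-logit))
    return p > cutoff

# Invented example: wheezing plus chest tightness around dust, animals or feathers.
example = {"W": 1, "SOB": 0, "TRB_C": 0, "TRB_N": 0, "CT_D": 1}
print(probable_asthma(example))  # logit = 0.87, p of about 0.70 -> True
```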
Occupations
Each study subject was asked about their longest held occupation. This question was derived from the National Health and Nutrition Examination Survey III (item HAS17R) and asked of each study participant: "Thinking of all the paid jobs or businesses you ever had, what kind of work were you doing the longest?" [16]. Occupations were coded using the 1980 U.S. Bureau of the Census Occupational Classification Codes [17]. Those who reported never having worked (n = 312) and those employed in the Armed Forces (n = 61) were excluded from the analysis. Based on prior studies by the authors [18], together with a review of the literature, the coded occupations were grouped into seven categories: administrative/secretarial, health-related, teaching, service-related, farm-related, precision production, and other occupations.
Health-related Quality of Life (QoL)
The Medical Outcomes Study Short Form-12 (SF-12) health-related quality of life instrument was administered to all study participants. The SF-12, an abbreviated version of the SF-36, is commonly included in populationbased studies to assess perceived health status [19], and its use has been validated in studies of older persons [20] and in clinical and community settings [21]. Scores on the 12 items were used to create two separate summary scores: a physical component score (PCS) and a mental component score (MCS). Scores ranged from 0 (the worst possible health) to 100 (the best possible health). In addition, the mini Asthma Quality of Life (mini-Asthma QoL) questionnaire was administered to those study participants who met the case definition for current asthma (n = 207). Mini-Asthma QoL measures functional impairments that are most troublesome to subjects with asthma during the 2 weeks prior to responding to the survey, and has four domains: 1) symptoms (5 items); 2) activity limitation (4 items); 3) emotional function (3 items); and 4) environmental stimuli (3 items). All responses were recorded on a 7-point Likert scale (from 1 = maximum impairment to 7 = no impairment). Responses to both the SF-12 scale and mini-Asthma QoL were scored according to published guidelines [21,22].
Other measures
The following covariates were also included in the analysis: 1) age (four categories); 2) sex (male, female); 3) education level (four categories); 4) income level (four categories); 5) geographic location (urban, rural); 6) history of hay fever; 7) pet ownership (three categories); 8) smoking status (non-smoker, current smoker, and past smoker): this variable was defined using two questions: "Have you smoked at least 100 cigarettes during your entire life?" Those who replied "yes" were asked "Do you smoke cigarettes now?"; those who responded in the affirmative to both questions were classified as current smokers, those who smoked cigarettes in the past but no longer smoke cigarettes were classified as past smokers, and those who stated that they never smoked at least 100 cigarettes in their entire life were classified as non-smokers; 9) environmental tobacco smoke, defined based on responses to the question "Other than the [respondent], how many people in the home smoke?"; 10) self-rated health, assessed using the question: "In general, would you say your health is excellent, very good, good, fair, or poor?" The responses were dichotomized into excellent/good and fair/poor; 11) complaint of pain: respondents were asked how often they were troubled with pain and how bad their pain was most of the time. The responses were grouped into three categories: no pain, mild pain, and severe pain; 12) body mass index (BMI), defined as the weight in kilograms divided by the height in metres squared (kg/m²). This variable was computed from self-reported weight and height and categorized into: normal weight (BMI < 25), overweight (BMI 25-29.9), and obese (BMI ≥ 30). Missing values were coded as a separate category; and 13) health insurance status. Nocturnal symptoms of asthma were defined using the question (asked separately for each symptom): "At any time in the last 12 months, have you been awakened at night by an attack of: 1) wheezing, 2) chest tightness, 3) shortness of breath, 4) cough?" To compare our study results with prior published studies of asthma in the elderly, we performed a comprehensive MEDLINE search for English-language articles published between 1966 and April 2005, using the keyword terms "asthma", "elderly", "Health surveys or prevalence", and "Epidemiology". A total of 13 population- or community-based studies were identified, and data on the type of study, sample size, response rate, definition of asthma, and prevalence estimates of asthma were abstracted and summarized (Table 6). Only those studies that enrolled subjects aged 65 years and older, clearly defined asthma as one of the outcome variables, and published prevalence estimates of asthma were included in the summary table.
Statistical analysis
Comparison of the sample data to the U.S. Census 2000 data for west Texas suggested that the sample slightly underestimated the proportion of Hispanics and overestimated women. Therefore, data were weighted using poststratification. The post-stratification adjustment cells were made up of age (65-69, 70-74, 75-79, and 80+), sex (Male, Female) and ethnicity (Hispanics, non-Hispanic White) categories. First, the census data (for 108 west Texas counties) and the wave-3 sample were stratified by age, sex, and ethnicity; then, an adjustment factor was computed by dividing the census cell proportion by the sample cell proportion. Finally, sampling weights were computed using the following formula [23]: Final Weight = (Total Number in Census Population/Total # in Sample) * Adjustment Factor Weighted prevalence estimates and their corresponding 95% confidence intervals were computed. Since the outcome variable was categorical, polytomous logistic regression analyses were used to compute the odds ratios and their corresponding 95% confidence intervals. In polytomous logistic regression, the odds of current and probable asthma were simultaneously compared to no asthma, the common reference category. Odds ratios were adjusted for age, sex, race/ethnicity, smoking status, and history of hay fever. STATA statistical software version 9.0 (Stata Corp, College Station, TX), which incorporated sampling weights, was used for all the analyses.
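The weighting step can be illustrated with a short sketch: for each age by sex by ethnicity cell, the adjustment factor is the census cell proportion divided by the sample cell proportion, and the final weight scales this factor by the ratio of the census total to the sample total, exactly as in the formula above. The cell counts below are placeholders, not the study data.

```python
def poststratification_weights(census_counts: dict, sample_counts: dict) -> dict:
    """Final weight per cell = (census total / sample total) * adjustment factor,
    where the adjustment factor is census cell proportion / sample cell proportion."""
    census_total = sum(census_counts.values())
    sample_total = sum(sample_counts.values())
    weights = {}
    for cell, n_sample in sample_counts.items():
        adjustment = (census_counts[cell] / census_total) / (n_sample / sample_total)
        weights[cell] = (census_total / sample_total) * adjustment
    return weights

# Placeholder post-stratification cells: (age group, sex, ethnicity) -> count
census = {("65-69", "Female", "Hispanic"): 12000, ("65-69", "Female", "NH White"): 30000}
sample = {("65-69", "Female", "Hispanic"): 40,    ("65-69", "Female", "NH White"): 160}
print(poststratification_weights(census, sample))
# Each cell weight equals its census count / sample count (300.0 and 187.5 here).
```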
Results
The socio-demographic characteristics of the study sample are presented in Table 1. The mean age of the study participants was 75.5 years (SD = 6.4). Of the 3021 participants, 878 were male and 2143 were female. Approximately 19% were obese (BMI ≥ 30). The prevalence patterns of current and probable asthma by selected characteristics are presented in Table 2 (the outcome is a three-category variable: no asthma (reference), current asthma, probable asthma; prevalence estimates were obtained by cross-tabulating individual characteristics with this outcome variable). The overall weighted prevalence of current asthma was 6.3% (95%CI: 5.3-7.2), whereas an additional 9.0% (95%CI: 7.8-10.1) of the respondents had probable asthma. Hispanic Americans reported a lower prevalence of current asthma (4.0%, 95%CI: 1.9-6.1) as compared to non-Hispanic whites (7.1%, 95%CI: 6.1-8.1). No significant racial/ethnic differences were observed for probable asthma (Table 2). The prevalence estimates of current and probable asthma were slightly higher among females as compared to males. More than half of the sample were non-smokers (Table 1); only 8.2% reported currently smoking cigarettes, and the prevalence of probable asthma was significantly higher in this group (16.8%, 95%CI: 11.8-21.8) as compared to non-smokers and ex-smokers (Figure 1). The prevalence of current asthma was highest among those who reported farm-related occupations as their longest held job (9.5%, 95%CI: 3.6-15.4), whereas the prevalence of probable asthma was highest among those who reported service-related occupations (12.8%, 95%CI: 8.4-17.1) as their longest held occupation (Table 2). When the data were separated by gender, the prevalence of probable asthma was slightly higher among women (13.0%, 95%CI: 8.1-17.8) as compared to men. Approximately one-fourth of respondents reported severe pain that prevented them from performing everyday activities (Table 1). Self-reported severe pain was associated with more than twice the odds of having current asthma (adjusted OR = 2.35, 95%CI: 1.64-3.36) and more than four times the odds of probable asthma (adjusted OR = 4.23, 95%CI: 2.99-5.99) when compared to those without pain. Those who reported being in fair or poor health also had more than twice the odds of current and probable asthma as compared to those who reported their health as excellent or good. Similarly, the adjusted odds of current and probable asthma were 1.98 times and 2.12 times greater among obese individuals, respectively, as compared to normal-weight individuals (Table 5). A significant interaction was found between female gender and obesity (BMI ≥ 30) for current asthma only (adjusted OR = 2.85, 95%CI: 1.06-7.66).
A significant positive association between current asthma and farm-related occupations was found in this study (adjusted OR = 2.09, 95%CI: 1.00-4.39). The odds of current asthma were significantly lower among those who reported teaching as their longest held occupation (adjusted OR = 0.36, 95%CI = 0.18-0.74) (Table 5). Those in service-related occupations had 1.47 times greater odds of probable asthma, but the results were significant only in the unadjusted analyses.
Figure 1 Prevalence of nocturnal symptoms among subjects with current asthma compared to no asthma.
Discussion
Asthma is a frequently overlooked and misdiagnosed medical condition in older patients. Morbidity due to asthma, if not properly diagnosed and managed, can have serious debilitating effects for older individuals. This large population-based survey was an attempt to estimate the prevalence of asthma and its correlates in this population in the west Texas region.
This study found a prevalence of current asthma of 6.3% (95%CI: 5.3-7.2), and an additional 9.0% (95%CI: 7.8-10.1) had probable asthma (symptom-based definition, DFP). In our earlier analysis of NHANES III data, using a case definition similar to that reported in this study, we reported a prevalence of current asthma of 3.6% (95%CI 2.9-4.2) in the U.S. population aged 60 and above [24]. The review of previously published population-based studies in the elderly suggests a wide variation in the prevalence of asthma (Table 6). The U.S. studies, on average, have reported a lower prevalence of asthma [4][5][6][7] as compared to European studies [25][26][27][28][29][30]. The four previous studies from the U.S. included in the summary table found a median prevalence of asthma of 4.7% (range 3.9% to 10%), whereas the median prevalence from the six European studies was 7% (range 6% to 8.4%) (Table 6). The three studies from the Asia-Pacific region [31][32][33] reported a median prevalence of asthma of 5.5% (range 3.9-10.5). The wide variation in reported prevalence estimates could in part be due to the use of different case definitions of asthma or different geographical regions, which complicates comparison among studies.
Table footnotes: the outcome is a three-category variable: no asthma (reference), current asthma, probable asthma. a Adjusted for age, sex, race/ethnicity, smoking status, and history of hay fever. b Missing information was coded as a separate category. c Occupations were coded using a dummy-variable approach and each occupation was regressed separately.
Some of the previously well-recognized correlates of asthma, such as female gender, low socioeconomic status (as measured by education and income), and hay fever, were also identified in this study [24,34]. In addition, smoking, poor health-related QoL, obesity, and certain occupational groups were associated with current or probable asthma.
Associations between smoking and asthma remain a subject of debate. In this study the prevalence of probable asthma was approximately 17% among current smokers; ex-smokers had a higher odds of current asthma (adjusted OR = 1.78, 95%CI: 1.24-2.55) whereas current smokers were more likely to have probable asthma (adjusted OR = 2.73, 95%CI: 1.77-4.21). In a recent study, Hardie et. al., [25] reported a greater than two-fold increased odds of current asthma among ex-smokers age 70 years and older. Similarly, a recent incident case-control study reported an increase risk of asthma among ex-smokers [35]. Prior population-based surveys, focusing on younger adults, have largely failed to find such an association. In the NHANES III analysis, a positive association of current smoking was found with the presence of wheezing, but not with current asthma, suggesting possible confounding or misclassification with non-asthma causes of wheezing, such as emphysema or chronic bronchitis [24]. Similarly, results from the European Community Respiratory Health Survey (ECRHS) also found no association of asthma with either a current or past history of smoking [34]. Since ECRHS is a study of young adults, it is possible that, being of lesser duration, exposure to tobacco smoke has not yet had time to cause serious damage to airways that may contribute to the appearance of asthma. An alternative explanation could be that the general decline in prevalence of smoking in most developed countries partly explains the lack of association observed in the younger population. The results of this study suggest that despite quitting smoking, the airway damage is not completely reversible. However, further studies are needed in older populations to assess the long term impact of smoking on asthma.
Self-rated health is considered a valid measure of person's health [36,37] and has been shown to relate directly to quality of life [38]. The SF-12 has been used previously to measure health outcomes for persons suffering from asthma [39] and COPD [40]. Use of both generic and asthma specific QoL measures are recommended to assess the impact of asthma on patient's daily life [41]. In a large community survey of elderly, Enright and colleagues [6] reported that subjects with asthma had significantly lower QoL and higher degree of impairment of activities of daily living. They were more likely to report symptoms of depression and poor general health. Similarly, Nejjari and colleagues [42] in a population based case-control study reported that older subjects with asthma were more likely to report lower QoL than controls. Breathlessness was reported as a major cause of lower QoL. In this study more than one-third of participants rated their health as fair or poor. Among those with current and probable asthma this percentage increased to approximately 50% and 60%, respectively. Since such a large proportion of subjects with probable asthma (i.e., without a clinical diagnosis of asthma) complained of poor health, it is possible they represent a group with as yet undiagnosed (and, hence, untreated) asthma. In addition, both current and probable asthma were associated with severe pain, poor physical health related quality of life and poor performance on the mini-Asthma QoL environmental domain subscale, all of which add consistency to this impression.
Although several recent studies are finding an association, the relationship between asthma and obesity remains controversial or, at best, unexplained. This association has been observed in children and adults, [24,43] as well as among nurses [44] and other health care workers (author's unpublished data). In this study we report a positive association between current asthma and obesity in the elderly, which was only significant among females (adjusted OR = 2.74, 95%CI 1.74-4.33; p value for interaction term = 0.038). The interaction term was not statistically significant for probable asthma. These findings are consistent with those of other population-based studies [44][45][46][47]. Beckett et al., [46] in a prospective study of 4547 African-American and White men and women, found a significant association between incident asthma and body mass index in females only. Camargo and colleagues, in a prospective study of registered nurses, found an association between body mass index and incident cases of asthma [44]. Chen et al., [47] in a large longitudinal study of the Canadian population reported a significant association between obesity and development of asthma among women. However, these and our results contrast some-what to recently reported results from an incident casecontrol study on Swedish adults which reported an odds ratio of 3.0 and 3.3 in both females and males respectively [35]. The authors included 309 cases of incident asthma of which 202 (65%) were women. Although the authors enrolled an equal number of controls, they did not provide information on the gender distribution of this comparison group. Moreover, they did not adjust their results for known confounders including smoking and hay fever, which in part could explain the discrepant findings. Although a biological mechanism to explain such an association remains elusive, the strong evidence observed across all age groups, among different occupational groups, and from studies of all types (cross-sectional surveys, case-control studies and prospective studies) suggests a possible causal relationship between obesity and asthma.
In this study two occupational groups were significantly associated with current asthma. Those who reported teaching as their longest held occupation were 0.36 times less likely to have current asthma. This is in contrast to other reports that found higher rates of asthma among teachers including our own studies in other populations [18,48]. Kraut et. al., [48] reported elevated odds ratios for "other teaching and related occupations" (OR 2.54, 95% CI 1.18-5.44); Whelan et. al., [49] reported higher prevalences of work-related upper respiratory symptoms and wheezing among teachers, but not asthma. Differences in the study population could in part explain the discrepant findings. Alternately, the lower odds observed among teachers in this study could reflect a cohort effect. Following the energy crisis of the 1970s, schools were made more airtight. This resulted in school buildings with poor ventilation and excess moisture, and the subsequent risk of exposure to multiple antigens, including mold and other indoor air contaminants [50,51]. It is plausible that teachers in this group may have worked in this profession before changes were made to school building codes, and may not have been exposed to the poor indoor air quality and other environmental conditions that are being reported by the younger working population.
Farm-related occupations have previously been reported to be associated with asthma among adults, as is in this study. In the present study, subjects with farm-related occupations had twice the odds of current asthma. When the data were stratified by gender, the association was primarily seen in males (adjusted OR = 2.51, 95%CI: 1.02-6.21). There was no difference in the prevalence of hay fever among those with or without farming occupations, raising the possibility that the increased prevalence of current asthma in this population is of non-allergic origin. This is consistent with recently reported findings in Norwegian farmers with current asthma that was of non-atopic origin [52]. In our earlier analyses of NHANES III data, [18] a greater than four-fold odds of work-related asthma (OR = 4.22, 95%CI: 1.76-10.10) was observed among those with farm-related occupations. In the French PAQUID cohort, retired farm workers (aged 65 and older) had more than five times the odds (OR = 5.35, 95%CI: 1.33-21.50) of current asthma [27]. Similarly, Kogevinas et. al., [53] reported an odds ratio of 2.62 (95% CI 1. 29-5.35) among farmers who participated in the ECRHS.
The service-related occupation group had significantly higher odds of probable asthma in the unadjusted analyses only. The three major groups that made up this occupational category were: food-related occupations, housekeepers/janitors, and hairdressers. All of these occupations, which involve the use of chemicals and substances that are respiratory irritants, have previously been associated with an increased risk of asthma [18,54].
There were some limitations to this study. Since the study was cross-sectional in nature, a cause-and-effect relationship cannot be established. There were 41 subjects who reported having both current asthma and chronic bronchitis; inclusion of these subjects caused a slight overestimation of the current asthma prevalence. Respondents with chronic bronchitis were not excluded from the analysis because symptoms of asthma and chronic bronchitis can overlap, especially in old age. Smoking confounds asthma, and subjects with asthma tend to differ in their smoking habits from those without asthma; it was difficult to separate these associations in a cross-sectional survey. Another limitation of the study is possible misclassification of current asthma status. Study respondents whose asthma was controlled or in remission at the time of the study may have responded as not having asthma and hence been classified as non-asthmatic; however, if their asthma was not under control, they may have responded affirmatively to questions on asthma diagnosis and thus been classified as having current asthma. Survey sample attrition over time is also a potential concern; however, no evidence for differential survival was found in the study. With advancing age, quality of life in asthmatics can be compromised by the concurrent presence of other chronic medical conditions, which could also partly explain the poor physical QoL observed in this study. However, our results are consistent with earlier findings in which both moderate and severe persistent asthma were associated with poor QoL among the elderly [55]. Finally, no reference values are available for the mini Asthma QoL in the general elderly population; this fact, in addition to the absence of indoor monitoring data, makes the interpretation of low scores on the environmental domain subscale of the Asthma QoL (reflecting poor QoL) difficult.
Conclusion
This study found that asthma is a common medical condition among the elderly. Several factors, including female gender, low socio-economic status, hay fever, obesity, and smoking status, were associated with current or probable asthma. The majority of subjects with current or probable asthma rated their health as fair or poor, and their quality of life was compromised. Male farmers had higher odds of current asthma, whereas lower odds of current asthma, possibly due to a cohort effect, were observed among those in a teaching occupation.
|
2016-05-04T20:20:58.661Z
|
2005-09-21T00:00:00.000
|
{
"year": 2005,
"sha1": "60a55b83384747da68fe6dba396e1ef58f371b2f",
"oa_license": "CCBY",
"oa_url": "https://bmcpublichealth.biomedcentral.com/track/pdf/10.1186/1471-2458-5-97",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "4306eae1e72504ae0b5dff6fd550501920bceae9",
"s2fieldsofstudy": [
"Medicine",
"Sociology"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
245854631
|
pes2o/s2orc
|
v3-fos-license
|
Give or Take: Effects of Electron-Accepting/-Withdrawing Groups in Red-Fluorescent BODIPY Molecular Rotors
Mapping microviscosity, temperature, and polarity in biosystems is an important capability that can aid in disease detection. This can be achieved using fluorescent sensors based on a green-emitting BODIPY group. However, red fluorescent sensors are desired for convenient imaging of biological samples. It is known that phenyl substituents in the β position of the BODIPY core can shift the fluorescence spectra to longer wavelengths. In this research, we report how electron-withdrawing (EWG) and -donating (EDG) groups can change the spectral and sensory properties of β-phenyl-substituted BODIPYs. We present a trifluoromethyl-substituted (EWG) conjugate with moderate temperature sensing properties and a methoxy-substituted (EDG) molecule that could be used as a lifetime-based polarity probe. In this study, we utilise experimental results of steady-state and time-resolved fluorescence, as well as quantum chemical calculations using density functional theory (DFT). We also explain how the energy barrier height (Ea) for non-radiative relaxation affects the probe’s sensitivity to temperature and viscosity and provide appropriate Ea ranges for the best possible sensitivity to viscosity and temperature.
Introduction
Fluorophores that are sensitive to environmental properties are very useful in biological studies and help to understand changes in the intracellular environment. Fluorescent molecular probes are widely used for imaging polarity [1,2], temperature [3,4], and microviscosity [5][6][7]. Fluorescent sensors have already been utilised in experimental objects, such as live cells [8][9][10], various organelles [3,11,12], polymers [13,14], aerosols [15,16], and lipid membranes [17][18][19]. The working principle of the majority of such sensors is based on the competition between fluorescence and non-radiative relaxation, where the rate of the latter is heavily affected by either viscosity, temperature, or polarity [20]. Together with fluorescence microscopy or fluorescence lifetime imaging microscopy (FLIM), fluorescent probes provide a non-invasive method for imaging changes in the medium [11,21,22].
Some of the most popular microviscosity probes are based on the BODIPY group [8,23,24], such as BODIPY-C10 (Figure 1). They are generally used as fluorescence lifetime probes rather than simple fluorescence intensity probes because the lifetime is independent of the local concentration of fluorophores and of the conditions of excitation and detection [25]. These probes stand out for having a monoexponential fluorescence decay, which simplifies data analysis [26]. Studies about polarity- and temperature-sensitive BODIPY-based fluorophores have also been published [9,27,28]. Furthermore, it has already been shown that meso-
Figure 1. Molecular structures of the BODIPYs investigated in this research (BP-PH-CF3, BP-PH-OMe, BP-PH-8M) together with previously reported BP-PH [33] and the well-known viscosity probe BODIPY-C10.
Absorbance and Fluorescence Spectra
We begin by investigating the basic spectroscopic properties of the new fluorophores and compare them to those of BP-PH and BODIPY-C10. The absorbance spectra (Figure 2A) of all dyes show a higher-energy band at 300-450 nm and the main absorption band in the 500-630 nm region. BP-PH shows an increased absorption wavelength (∆λ = 90 nm) compared to the well-studied BODIPY-C10 due to the extension of conjugation. The addition of the EWG on the β-phenyls (BP-PH-CF3) results in a small blue-shift of the main absorption band compared to BP-PH (∆λ = 15 nm). An opposite effect is observed when the EDG is attached to the β-phenyls (BP-PH-OMe), resulting in the highest absorption wavelength of 625 nm. The position of the main absorption band of the new derivative BP-PH-8M is blue-shifted by 60 nm relative to the previously reported BP-PH. The shift is caused by the addition of the methyl groups on the β-phenyls and the BODIPY core, which restricts the conjugation in BP-PH-8M. Very similar blue- and red-shift tendencies are observed in the fluorescence spectra (Figure 2B). BP-PH-8M and BP-PH-CF3 correspond to the shorter wavelengths, with fluorescence maxima at 565 nm and 610 nm, respectively. In contrast, BP-PH-OMe shows the largest bathochromic shift, with the fluorescence peak at 680 nm, due to the introduction of the EDG. The Stokes shifts for the unreported derivatives BP-PH-8M, BP-PH-CF3, and BP-PH-OMe were 1169 cm-1, 998 cm-1, and 1294 cm-1, respectively. The results also show that the EWG at the β-phenyls decreases the Stokes shift, while the opposite happens when the EDG is introduced. As a result, by varying substituents on the β-phenyls, we were able to tune the emission wavelengths of the new BODIPY fluorophores over a 500-700 nm range. In addition, quantum yield (QY) measurements in toluene demonstrated that attaching the EWG to the β-phenyls increases the QY value by 29% (BP-PH-CF3), while the EDG reduces it by 16% (BP-PH-OMe), with respect to the previously reported BP-PH without β-phenyl substituents [33]. BP-PH-8M showed the highest QY of 87% due to the restricted intramolecular rotation of the β-phenyls. The absorption, fluorescence emission, Stokes shift, fluorescence lifetime, QY, and radiative and non-radiative relaxation values of the investigated conjugates are displayed in Table 1. Normalised absorbance and fluorescence spectra of BP-PH-8M and BP-PH-CF3 in solvents of various polarities are shown in Figure S1, ESI.
Table 1. Theoretically calculated and experimental values of the peak maxima of absorption (λA) and fluorescence emission (λF) spectra, as well as Stokes shifts (νSS), for the investigated derivatives in toluene. Experimental values of fluorescence lifetime (τ), quantum yield (QY), radiative (kr), and non-radiative (knr) decay rates in toluene are also displayed.
Theoretical Calculations
The DFT calculations correctly predict the trend of increasing absorption and fluorescence wavelengths from BODIPY-C10 to BP-PH-OMe (Table 1). The geometry of BP-PH-8M shows that the methyl groups on the BODIPY core force the β-phenyls out of plane (Figure S2, ESI), leading to weaker conjugation and shorter absorption and fluorescence wavelengths. Furthermore, the DFT calculations reveal that the increasing absorption and fluorescence wavelengths, going from BP-PH-CF3 to BP-PH to BP-PH-OMe, are the result of a closer energy match between the HOMO of BODIPY and that of the β-phenyls (Figure S3, ESI). As a result, the HOMO of the resulting molecule is higher in energy, leading to a smaller HOMO-LUMO gap and longer absorption and fluorescence wavelengths. We note that the theoretical wavelengths are shorter than the experimental wavelengths by approximately 100 nm. This is a result of a well-known weakness of DFT, which overestimates electronic transition energies in BODIPY fluorophores [34][35][36].
Meso-phenyl BODIPYs are known for their viscosity and temperature sensitivity, which arises due to the competition between fluorescence and non-radiative relaxation [25].
The key factor affecting the rate of non-radiative relaxation is the height of the energy barrier that the molecule needs to cross during the rotation of the meso-phenyl in order to relax non-radiatively (Figure 3A) [37][38][39][40]. Therefore, we calculated the barriers for the new BODIPY compounds and contrasted them with the barriers for BODIPY-C10 and BP-PH (Figure 3B) [33]. BP-PH-8M has by far the highest barrier due to the extra methyl groups that prevent the rotation of the meso-phenyl. This explains why BP-PH-8M has the highest quantum yield of fluorescence and the slowest non-radiative decay rate (Table 1). The remaining molecules have smaller barriers, resulting in a faster non-radiative relaxation.
Figure 3. Curves for … (orange), BP-PH-CF3 (green), BP-PH (red), and BP-PH-OMe (purple) calculated using DFT. θ is the dihedral angle between the BODIPY core and the meso-phenyl group. The S1,m minima of the curves were set to 0 for easier comparison.
Time-Resolved Fluorescence and Its Sensitivity to Viscosity, Temperature and Polarity
In order to test if the new molecules could be used as probes of their environment, we explored their viscosity, temperature, and polarity sensing capabilities. Viscosity sensitivity measurements were performed in non-polar toluene/castor oil mixtures, covering the viscosity range of 0.5-920 cP. The majority of the observed fluorescence decays were monoexponential. The average lifetimes of biexponential decays were calculated using Equation (7) (Materials and Methods section), owing to the small contribution from the self-fluorescent castor oil. The viscosity-dependent fluorescence decays showed a slight viscosity dependence for BP-PH-CF3 (Figure 4B) and almost no viscosity dependence for BP-PH-8M or BP-PH-OMe (Figure 4A,C). Overall, the new derivatives showed much lower viscosity sensitivity compared to the viscosity sensor BODIPY-C10 (Figure 4D). This is expected, as the theoretically calculated energy barriers for viscosity-sensitive non-radiative relaxation are larger than those for BODIPY-C10. Thus, these results support the hypothesis that a large energy barrier predicted by the DFT calculations leads to little to no viscosity sensitivity [33,41]. Conjugates BP-PH-8M and BP-PH-CF3 showed very similar fluorescence lifetimes, in a 3-5 ns range, to the previously reported conjugate without additional moieties (BP-PH). Meanwhile, BP-PH-OMe showed much shorter lifetimes, around 1.5 ns. Temperature-dependent fluorescence decays recorded in toluene reveal that all new fluorophores exhibited moderate temperature dependence (Figure 4E-G). The extent of temperature dependence was similar to BP-PH, although the new conjugates BP-PH-8M and BP-PH-CF3 showed longer lifetimes. BP-PH-OMe showed the weakest temperature sensitivity; its small lifetime values were more comparable to the widely studied BODIPY-C10, which showed low lifetimes in the low-viscosity solvent (toluene). Fits of Figure 4H are shown in Figure S4, ESI.
Lastly, the polarity dependence experiments were performed ( Figure 4I-L). BP-PH-8M stood out among all the fluorophores; its sensitivity to solvent polarity was minimal ( Figure 4I). The key structural difference of BP-PH-8M compared to other fluorophores was the existence of the methyl groups that prevented the rotation of the meso-phenyl group ( Figure 1). Therefore, it is likely that the non-radiative relaxation pathway responsible for the polarity sensitivity involves the rotation of the meso-phenyl substituent. This particular intramolecular rotation is known to result in non-radiative relaxation of meso-phenyl-BODIPYs [42]. However, it also causes viscosity-sensitivity [25], which BP-PH-OMe does not possess. Therefore, another non-radiative relaxation pathway is likely to also involve the rotation of the meso-phenyl group.
BP-PH-CF3 with the EWG substituent showed moderate polarity-sensitive properties. The kinetics of the trifluoromethyl-substituted BODIPY (Figure 4J) were split into two groups: one consisted of non-polar and medium-polar solvents (cyclohexane, toluene, chloroform, and DCM) and the other of very polar solvents (DMSO and methanol). Meanwhile, the EDG-substituted BP-PH-OMe showed strong polarity dependence and gradually decreasing lifetimes with increasing solvent polarity (Figure 4K). Thanks to this polarity dependence, the methoxy-substituted conjugate could be used as a red-emitting lifetime-based polarity sensor. Compared to the absolute majority of other known fluorescent polarity sensors, such as Reichardt's dye [43], BP-PH-OMe showed a far smaller solvatochromic shift (Figure 5). However, the constant spectral position of BP-PH-OMe (Figure 5) is an advantage if the FLIM technique is used. The spectral detection window can be correctly chosen beforehand without the need to guess, while the polarity can be determined from the fluorescence lifetime of BP-PH-OMe. The visually observed trends (Figure 4D,H,L) can be quantified using the relative sensitivity S [44], S = (δτ/τ)/δx × 100%, where τ is the fluorescence lifetime and δx is the change of the parameter (temperature in °C, polarity in ∆f). S is expressed as a percentage change of lifetime per step change of the parameter. The calculated values of relative sensitivity to temperature and polarity are shown in Table 2. The higher the value, the stronger the sensitivity of the fluorophore to the particular environmental parameter. The sensitivity to viscosity is usually quantified using the x value from the Förster-Hoffmann equation [45], τ = Cη^x, where τ is the fluorescence lifetime, η is the viscosity, and C and x are constants. Unsurprisingly, the widely used viscosity sensor BODIPY-C10 showed the highest value of viscosity sensitivity (x = 0.21, Figure S5, ESI). The remaining molecules displayed very minor viscosity sensitivity (≤0.05). All five conjugates showed small-to-moderate temperature sensitivity, expressed as a percentage change in lifetime per degree Celsius, in the 0.3-1.0% range. However, a significant polarity sensitivity was observed for BP-PH-CF3, BP-PH, and BP-PH-OMe, the latter showing the strongest sensitivity to polarity (265%). Therefore, our results show that BODIPY compounds with β-phenyl substituents can be tuned for making new temperature and polarity sensors. In addition to the BODIPY fluorophores investigated in this work, there are a number of other known BODIPY probes that are sensitive to viscosity and temperature [9,10,21,28]. It is already known that the energy barrier for non-radiative relaxation is the key parameter affecting viscosity and temperature sensitivity [10,40,41]. Therefore, we set out to find the optimal values that the energy barrier must have in order for the molecule to be the best sensor of viscosity or temperature. We started first with the viscosity probes.
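As an illustration of how a viscosity-sensitivity exponent such as the x value quoted above is typically extracted, the sketch below performs a log-log fit of the Förster-Hoffmann relation τ = Cη^x to lifetime-viscosity data. The numerical values in the example are illustrative placeholders, not measurements from this work.

```python
import numpy as np

def forster_hoffmann_exponent(viscosities_cP, lifetimes_ns):
    """Fit log(tau) = log(C) + x*log(eta) and return (x, C)."""
    log_eta = np.log10(np.asarray(viscosities_cP, dtype=float))
    log_tau = np.log10(np.asarray(lifetimes_ns, dtype=float))
    x, log_C = np.polyfit(log_eta, log_tau, 1)   # slope = x, intercept = log10(C)
    return x, 10.0 ** log_C

# Illustrative (made-up) toluene/castor-oil data points, not data from this paper
eta = [5, 20, 80, 300, 900]          # viscosity, cP
tau = [1.8, 2.4, 3.2, 4.2, 5.4]      # fluorescence lifetime, ns
x, C = forster_hoffmann_exponent(eta, tau)
print(f"x = {x:.2f}, C = {C:.2f}")   # x near 0.2 would resemble BODIPY-C10's sensitivity
```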
The fluorescence lifetime and intensity of viscosity-sensitive fluorophores typically show a sigmoidal dependence on viscosity on a double logarithmic plot, as shown in Figure 6, when the fluorophore is characterised over a sufficiently large viscosity range [29,41]. If the fluorophore is characterised only at intermediate viscosities, a linear viscosity-fluorescence lifetime (or intensity) dependence is observed on a logarithmic plot [24,46], which conforms to the Förster-Hoffmann equation (Equation (2)). Deviations from linearity occur due to the fact that a fluorophore cannot have a fluorescence lifetime equal to zero or infinity. The maximum possible fluorescence lifetime is set by the radiative decay constant, while the minimum possible lifetime is limited by the time required for the molecule to change its geometry and relax to the ground state at zero viscosity and infinite temperature. The full sigmoidal viscosity-lifetime dependence is described by Equation (3), which was derived using the Förster-Hoffmann equation (Equation (2)) as a starting point [29]; in Equation (3), τ is the fluorescence lifetime, η is the dynamic viscosity, C and x are constants, Ea is the activation energy for non-radiative relaxation, knr,max is the non-radiative decay constant at zero viscosity and infinite temperature, k is the Boltzmann constant, T is the temperature in Kelvin, kr is the radiative decay constant, kx is the sum of any other viscosity- and temperature-independent rate constants that lead to population loss from the fluorescent state, and τmin and τmax are the minimum and maximum fluorescence lifetimes of the probe, respectively. Usually, it is assumed that the parameter x, which comes from the Förster-Hoffmann equation, is the most important parameter that shows the sensitivity of the molecule to viscosity [25,26,29,46,47]. However, a high parameter x has disadvantages. For instance, increasing it from 0.5 to 0.9 (Figure 6) shortens the viscosity sensitivity range. This creates a viscosity probe that can only be used for a limited range of viscosities. In our opinion, a parameter that better reveals the applicability of a viscosity sensor is its dynamic range, which is equal to the ratio of fluorescence lifetimes at infinite and zero viscosity at room temperature (τη=∞/τη=0). As shown by the simulated time-resolved fluorescence decays in Figure S6 (ESI), if the ratio is not sufficiently high, the sensor shows a similar response at both high and low viscosities. Therefore, its applicability suffers, and a high constant x would not make this sensor a useful viscosity probe. Since our goal is to determine the values of the energy barrier that are suitable for a viscosity sensor, we derived how τη=∞/τη=0 depends on the energy barrier (Equation (4)); the full derivation of Equation (4) is provided in the ESI. Equation (4) shows that the dynamic range of the probe depends on two parameters: the ratio τmax/τmin and the height of the energy barrier (Ea). The lifetimes τmin and τmax correspond to zero viscosity, infinite temperature, and infinite viscosity, 0 K temperature, respectively. The τmax/τmin ratio for the derivatives examined in this research can be obtained from the fitting parameters in Table S1, ESI, and is approximately equal to 500. Figure 7A displays how the dynamic range of the viscosity probe depends on Ea when the τmax/τmin ratio is equal to this value.
The blue colored region shows dynamic range values between 5 and 50 and is considered a good dynamic range for a moderate-viscosity sensor. The upper bound (τ η=∞ /τ η=0 > 50, the red colored area) is set by the typical time resolution of TCSPC or FLIM [48]. A viscosity sensor with τ η=∞ /τ η=0 > 50 could only be used for imaging high-viscosity environments, as its fluorescence lifetime at moderate viscosities would be too fast for the usual TCSPC or FLIM setups.
The calculations show that acceptable values of Ea for a viscosity sensor with a molecular structure similar to our investigated compounds (Figure 1) are 0.05-0.12 eV, preferably closer to 0.05 eV. These values depend slightly on τmax and τmin, which are set by the radiative decay constant and the degree of geometrical change occurring during non-radiative relaxation, respectively. The dependencies when the ratio τmax/τmin equals 100 and 2500 are shown in Figure S7 (ESI).
Next, we proceeded to theoretically estimate the optimal energy barrier height Ea for a fluorescent temperature sensor. The key parameter for a temperature sensor is its temperature sensitivity s, defined as the percentage change of the fluorescence lifetime τ per degree change of the temperature T. Starting with Equation (S1) (ESI), the dependence of the sensitivity on Ea can be obtained (Equation (6)), where τT=∞ and τT=0 are the fluorescence lifetimes at infinite and 0 K temperature, respectively, k is the Boltzmann constant, and T is the temperature. The full derivation of Equation (6) can be found in the ESI. Using this equation, we calculated how the sensitivity to temperature depends on the energy barrier for non-radiative relaxation (Figure 7B) for three different τT=0/τT=∞ ratios. The results show that the optimal values of Ea (0.10-0.20 eV) are slightly higher than those for a viscosity probe. The ratio τT=0/τT=∞ is also important: for instance, if it is equal to 100, it will not be possible to reach a sensitivity of 1% at any Ea value. As the value of the ratio increases (500 or 2500), it becomes easier to develop a temperature probe with good sensitivity. To get a high ratio, a molecule needs to be able to relax fast at high temperatures, thus giving a low τT=∞ value. This would be the case if the molecular geometry needs to change as little as possible during temperature-dependent non-radiative relaxation. Furthermore, our calculations show that it may be very challenging to obtain a fluorescent temperature probe with sensitivity exceeding 2%. Fluorescent temperature sensors based on a completely different mechanism may be required to reach sensitivities higher than that, as in the work of Xue et al. [49] and Pietsch et al. [50].
Figure 7. (A) Dynamic range of the viscosity probe, equal to the ratio of fluorescence lifetimes at zero and infinite viscosities, with respect to the energy barrier height for non-radiative viscosity-dependent relaxation. τmax/τmin was set to 500 and T was set to 298 K. The shaded areas correspond to energy barrier values that would be appropriate for a moderate-viscosity sensor and a sensor suitable for high viscosities only. (B) Sensitivity to temperature of the fluorescent temperature sensor with respect to the height of the energy barrier for non-radiative relaxation, which is temperature-dependent. The curves were calculated for three different τT=0/τT=∞ ratios: 100 (black), 500 (red), and 2500 (blue). The shaded areas correspond to energy barrier values resulting in sensitivity above 1%.
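The trend in Figure 7B can be rationalised with a simple Arrhenius picture in which the non-radiative rate is knr = knr,max·exp(-Ea/kT) and τ(T) = 1/(kr + knr). The sketch below evaluates the resulting relative sensitivity as a function of Ea; this Arrhenius parameterisation and the expression it yields are assumptions made here for illustration, not the exact Equation (6) derived in the ESI.

```python
import numpy as np

K_B = 8.617e-5  # Boltzmann constant, eV/K

def sensitivity_percent(E_a_eV, ratio, T=298.0):
    """Relative temperature sensitivity |dtau/dT|/tau * 100, assuming an Arrhenius
    non-radiative rate: tau(T) = 1/(k_r + k_nr_max*exp(-Ea/kT)).
    'ratio' is tau(T=0)/tau(T=inf) = 1 + k_nr_max/k_r."""
    r = ratio - 1.0                       # k_nr_max / k_r
    u = E_a_eV / (K_B * T)                # dimensionless barrier height
    boltz = r * np.exp(-u)
    # |d(tau)/dT| / tau = (Ea / k T^2) * boltz / (1 + boltz)
    return 100.0 * (u / T) * boltz / (1.0 + boltz)

for Ea in (0.05, 0.10, 0.15, 0.20):
    print(Ea, round(sensitivity_percent(Ea, ratio=500), 2), "%/K")
# With ratio = 500 the sensitivity peaks slightly above 1%/K for Ea around 0.10-0.15 eV,
# broadly consistent with the 0.10-0.20 eV window discussed above.
```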
In Figure 8, we show guidelines for the energy barrier values required to obtain a fluorescent viscosity sensor or a temperature sensor. We also show the Ea values of the BODIPY molecules investigated in this research, together with some previously reported Ea values of BODIPY probes [9,10,33,41]. The scale represents which kind of sensor a BODIPY-based fluorophore is likely to be, depending on the value of the activation energy barrier, when the ratio τmax/τmin is equal to 500, which is an approximate value for the BODIPY probes investigated in this work. Two alternative scales for ratios of 100 and 2500 are shown in Figure S8, ESI. The results show that a viscosity probe requires a relatively small energy barrier of <120 meV, and ideally below 100 meV. Otherwise, the probe may also have substantial sensitivity to temperature, which is not generally desired. The most popular BODIPY viscosity probe, BODIPY-C10, satisfies this condition. The optimal barrier height for a temperature probe is above 120 meV, where the viscosity sensitivity is unlikely to be strong. This is where the probes BP-PH, BP-PH-CF3, and BP-PH-OMe are located, although the temperature sensitivity of the latter is overshadowed by its strong polarity sensitivity. If the energy barrier exceeds 200 meV, as is the case with BP-PH-8M, the fluorophore is unlikely to show strong sensitivity to either viscosity or temperature. Knowing these guidelines makes it possible to estimate the viscosity or temperature sensitivities of new probes before synthesis by calculating energy barrier values using DFT.
Dyes, Reagents, and Solvents
BODIPY-C10 and BP-PH were synthesised as previously reported [21]. The synthesis of the previously unreported derivatives, BP-PH-CF3, BP-PH-OMe, and BP-PH-8M, was accomplished using the Suzuki reaction and is described in the ESI. Reagents and solvents for the organic synthesis of the BODIPY molecules were purchased directly from commercial suppliers; solvents were purified by known procedures. Thin layer chromatography was performed using TLC-aluminum sheets with silica gel (Merck 60 F254). Visualization was accomplished by UV light. Column chromatography was performed using silica gel 60 (0.040-0.063 mm) (Merck). NMR spectra were recorded on a Bruker Ascend 400 spectrometer (400 MHz for 1H, 100 MHz for 13C, 128.4 MHz for 11B, 376.5 MHz for 19F). NMR spectra were referenced to residual solvent peaks. Melting points were determined in open capillaries with a digital melting point IA9100 series apparatus (Thermo Fischer Scientific) and were not corrected. Stock solutions of all dyes were prepared in toluene at a concentration of 2 mM and diluted for further experiments in solvents or their mixtures. Cyclohexane, toluene, castor oil, chloroform, dichloromethane (DCM), dichloroethane (DCE), dimethyl sulfoxide (DMSO), and methanol were obtained from Sigma-Aldrich. The viscosities of the toluene/castor oil mixtures were measured using a vibrational viscometer (SV10, A&D) at the temperatures of interest.
Theoretical Calculations
Quantum chemical calculations of the studied molecular rotors were performed using the electronic structure modeling package Gaussian09 [53]. The calculations were based on density functional theory (DFT) [54] (for ground state properties) and time-dependent DFT (TD-DFT) [55] (for excited state properties). The M06-2X hybrid functional [56] and cc-pVDZ basis sets [57] were used at all stages of the calculations; this use was previously validated by the functional benchmarks of Momeni et al. [58]. The conductor-like polarizable continuum model (C-PCM) [59] with solvent parameters of toluene was used to account for bulk solvent effects on the solute molecules.
Data Analysis
The Edinburgh F900 software package was used for fitting fluorescence decays. For biexponential fluorescence decays, intensity-weighted lifetimes were calculated (Equation (7)), τ = Σ a_i τ_i^2 / Σ a_i τ_i, where a_i is the amplitude and τ_i the lifetime of the i-th decay component. The goodness-of-fit parameter (χ2) was 1.5 or less for single decays. Further data processing and analysis were done with Origin 2018.
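A quick numerical evaluation of this intensity-weighted average (assuming the standard two-component form of Equation (7)) can be written as follows; the amplitudes and lifetimes in the example are arbitrary illustrative values.

```python
def intensity_weighted_lifetime(amplitudes, lifetimes):
    """Intensity-weighted average lifetime: sum(a_i*tau_i^2) / sum(a_i*tau_i)."""
    num = sum(a * t * t for a, t in zip(amplitudes, lifetimes))
    den = sum(a * t for a, t in zip(amplitudes, lifetimes))
    return num / den

# Illustrative biexponential fit result (lifetimes in ns)
print(intensity_weighted_lifetime([0.9, 0.1], [3.8, 1.2]))  # -> ~3.71 ns
```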
Conclusions
In conclusion, we reported new β-phenyl-substituted BODIPY fluorophores showing red-shifted absorption and emission. We investigated the sensitivity of the molecules to viscosity, temperature, and solvent polarity. While BP-PH-8M did not show significant sensitivity, we showed that BP-PH-CF3 is a moderate temperature probe. Furthermore, we showed that BP-PH-OMe has an exceptional combination of attractive properties. The fluorophore is a sensitive lifetime-based polarity sensor, it absorbs and emits in the red region of the visible spectrum, it has minimal sensitivity to other parameters, and it exhibits monoexponential decay kinetics.
Additionally, we analysed photophysical parameters that determine the viscosity or temperature sensitivity. Our theoretical results demonstrate that the sensitivity to viscosity and temperature strongly depends on the energy barrier for non-radiative relaxation. The optimal values of the barrier for a temperature probe are in the range of 100-200 meV, while a microviscosity probe should have a smaller barrier of 120 meV or less. We hope that these guidelines will help to develop new viscosity and temperature sensors as they make it easier to estimate the degree of viscosity or temperature sensitivity of probes before synthesis using theoretically calculated energy barrier values.
Author Contributions: K.M., spectroscopic characterisation, data analysis, and writing-original draft preparation; D.N., spectroscopic characterisation, data analysis, and theoretical derivations; R.Ž., spectroscopic characterisation and data analysis; J.D.-V., synthesis and characterisation of new compounds; S.T., (Stepas Toliautas) DFT calculations and data analysis; S.T., (Sigitas Tumkevičius) supervision of organic synthesis; A.V., conceptualization, writing-review and editing, supervision, and funding acquisition. All authors have read and agreed to the published version of the manuscript.
|
2021-12-24T16:07:42.119Z
|
2021-12-21T00:00:00.000
|
{
"year": 2021,
"sha1": "c39b84d4dde4b4420ea8d05aac4e3942dcffa5d1",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1420-3049/27/1/23/pdf",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "5492ba92d0290015a5633cdd23b2b5430d9ba577",
"s2fieldsofstudy": [
"Chemistry"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
54789780
|
pes2o/s2orc
|
v3-fos-license
|
Natural Convection-Radiation from a Vertical Base-Fin Array with Emissivity Determination
Experiments have been conducted to determine the emissivity of black chrome coated and uncoated aluminum surfaces. The emissivity of the surfaces is estimated considering combined convection-radiation heat transfer and is observed to be constant in the range of 60 to 110 °C. The combined natural convection and radiation heat transfer coefficients of a black chrome coated vertical-base vertical-fin array of size 70 x 70 mm, consisting of 22 aluminum fins with a fin spacing of 10 mm, have been determined at different heat inputs. A theoretical analysis of a single fin of constant thickness considering both convection and radiation has been used to predict the temperature distribution and heat flow. The theoretical values of heat flow estimated for the fin array are in good agreement with the experimental observations, validating the emissivity of the surface. The experimental data are further validated with the Nusselt number correlations presented by Churchill and Chu.
Introduction
Miniaturization of integrated circuits and reduction of the spacing between chips have contributed to significant improvement in the performance of computer systems, which must meet high power dissipation requirements. The temperature of these components is controlled by forcing air over the surface. However, surfaces are cooled by natural convection when trouble-free and noiseless operation is required, as in the cooling of certain electronic equipment, in room heating, or in special heat exchange processes. Coated finned surfaces are commonly employed for enhancement of heat dissipation by combined convection and radiation. The contribution of radiation from the hot surface to the ambient may account for more than 20% of the total heat dissipated.
Early experiments to estimate heat dissipation by free convection from a parallel plate geometry were undertaken by Elenbaas [1]. Laminar natural convection from a vertical plate subjected to uniform surface heat flux was presented by Sparrow and Gregg [2] using a similarity technique. The effect of staggering of fins was studied by Sobel et al. [3]. The condition for optimum spacing to minimize the temperature difference between the plate and the fluid was presented by Levy [4]. Using the Elenbaas correlation, Bar-Cohen [5] analysed an array of longitudinal fins to determine the optimal spacing and thickness for maximum heat dissipation. Bar-Cohen and Jelinek [6] showed that the fin spacing should equal the fin thickness for optimum fin material. Numerical analyses of natural convection between vertical parallel plate configurations were made by Leung et al. [7]. Sunil and Sobhan [8] considered the effect of variable thermal conductivity on the average heat transfer coefficient for a given fin height. An increase in the base temperature of the fin resulted in an increase in the average heat transfer coefficient, the range of which is influenced by the fin material. Dayan et al. [9] studied the contribution of thermal radiation to the total fin array cooling capacity. They concluded that the inclusion of radiation influences the optimal spacing between fins, an effect that is pronounced when the surface emissivity is low. When radiation becomes strong, owing to a large surface emissivity, the fin spacing is weakly affected by the radiation component. They observed that the surface temperature has no significant influence on the optimal fin spacing and attributed this to the fact that at higher temperature, buoyancy can still drive an effective convective flow through tighter channels. A long channel presents considerable flow resistance and therefore should be compensated by wider spacing between fins. A significant outcome of the investigation is that the optimal fin spacing lies in a very narrow range for a wide variety of array geometries; the channel length is identified as the most significant parameter. Studies have been undertaken to determine the optimum spacing between fins, the fin thickness for minimum mass, and the effects of fin thermal conductivity, the temperature distribution in the fin, and the combined heat transfer coefficient on the heat dissipation capability of a fin array system. In most of the analyses, the contribution of radiation is estimated indirectly as the difference between the combined heat loss and the convection loss estimated with equations. The emissivity of the surface is required to validate the heat loss by radiation. Hence, an experimental setup is fabricated to determine the emissivity of the chrome coated surface commonly used in fin arrays for cooling applications. An equation considering convection and radiation for a fin of uniform cross section is deduced from the energy balance relation. The equation is solved subject to boundary conditions to obtain the temperature variation and the local and average heat transfer coefficients for various operating conditions. The influence of emissivity, convection heat transfer, and ambient temperature on the overall heat transfer coefficient is determined and compared with the experimental data obtained with a fin array setup.
Experimental determination of emissivity
To determine the emissivity at different surface temperatures, an experimental setup consisting of a strip heater sandwiched between two aluminum plates of dimensions 100 x 200 x 4 mm is fabricated. The assembly is located in a wooden box with the aluminum plates held in a vertical orientation. The plate inside the box is supported with an epoxy resin plate of 5 mm thickness, and rock wool is packed between the resin plate and the wooden box to obtain negligible heat loss from the rear. Provision is made to replace the uncoated surface with a black chrome coated surface. This enabled determination of the surface temperatures with the aid of thermocouples for both conditions. A control panel consisting of a voltmeter, ammeter, temperature indicator, dimmerstat, and thermocouple selector switch for obtaining the pertinent information is provided for estimating the emissivity of the surfaces. The input power to the heater is varied to obtain the plate temperature at steady state. The thermocouples located at the four corners and at the centre of the surface showed identical temperatures. The values are validated against the temperatures measured with an IR thermometer, with a deviation of ±1.0 °C. Average Nusselt numbers are calculated using theoretical equations for the vertical isothermal flat plate available in the literature for both the uncoated and black chrome coated surfaces. Using these values, the heat leaving the surface by convection, and hence by radiation, is evaluated. From the calculations of the heat loss by radiation, the emissivity of the coated surface is estimated to be 0.7 in the temperature range of 60-110 °C. Hence this value is taken for comparison of the heat flow by radiation in the fin array experiments.
Experiments with fin array setup
An enclosure dissipating heat by natural convection, generated by a pulsating electronic component, is simulated by providing a heating element inside an enclosure with integral fins on one side, as shown in Fig. 1, and closed with a cover on the other. Such an enclosure/casing is generally used for locating electronic gadgets for operation in remote places and is designed for cooling by passive means. The size of the enclosure for locating the components as per the design requirement is 530 x 360 x 140 mm, with a threshold temperature of 120 °C to be ensured for smooth functioning of the electronic components. The enclosure is designed to have 6 rows, each row consisting of 22 vertical square fins of 5 mm thickness (t) and 70 mm length (H), protruding 70 mm from the vertical base (L), with a fin spacing (s) of 10 mm, and black coated. A gap width of 25 mm is provided between two successive rows. Three strip heaters connected in series, each of 400 W maximum rating, are located inside the enclosure at the base of the fins to simulate heat generation by electronic components, as shown in Fig. 1. The rectangular enclosure is made of aluminum for ease of fabrication, with the base unit being cast. The cast aluminum block is machined to obtain the fin array. The enclosure is put into simulated operating conditions with the base and fin array in a vertical orientation. The amount of heat supplied at the base is altered by varying the current to the heaters using a voltage regulator at the control panel. The input power is measured using a calibrated digital ammeter and voltmeter at steady state. Precalibrated copper-constantan thermocouples with an accuracy of 0.1 °C are fixed to the enclosure, fin base, and fin tip at different rows and connected to a digital temperature indicator. The temperatures at these locations are recorded at different heat inputs. The relatively high thermal conductivity of aluminum facilitated the achievement of an almost uniform temperature at the air-base interface of the fin array and the enclosure surface exposed to the atmosphere. Experiments are undertaken with the black chrome coated fin array system.
Analysis of fin considering radiation
One-dimensional steady heat conduction is assumed to be valid for the configuration of the fin shown in Fig. 2. To determine the differential equation that yields the fin temperature as a function of x along L, an energy balance is made on a differential element of width dx of uniform cross-sectional area A = Ht.
The governing differential equation in non-dimensional form is obtained and solved over the range of interest for different values of the temperature ratio term ψ and the emissivity ε of the test surface. The temperature distribution and the local heat transfer coefficient are evaluated using Eq. (2a) subject to the boundary conditions (2b). The average heat transfer coefficient evaluated from the numerical results for a single fin is used to estimate the heat flow from the fins. The equations presented by Yuncu and Kakac [11] are used for comparison.
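The dimensional form of this energy balance for a straight fin of uniform cross-section, kA d²T/dx² = hP(T − T∞) + εσP(T⁴ − T∞⁴), can be integrated numerically as sketched below. The non-dimensional Eq. (2a) itself is not reproduced here, so this dimensional version with a fixed base temperature and an insulated tip, together with the assumed values of h, k, and the temperatures, is only an illustrative stand-in for the actual calculation; the fin geometry matches the array described above.

```python
import numpy as np
from scipy.integrate import solve_bvp

# Fin geometry follows the array described above; h, k and the temperatures are illustrative.
k, eps, sigma = 200.0, 0.7, 5.67e-8       # thermal conductivity (W/m K), emissivity, Stefan-Boltzmann constant
H, t, L = 0.070, 0.005, 0.070             # fin height, thickness and protruding length (m)
h, T_inf, T_base = 6.0, 300.0, 380.0      # convection coefficient (W/m^2 K), ambient and base temperatures (K)
A, P = H * t, 2.0 * (H + t)               # cross-sectional area and perimeter of the fin

def fin_ode(x, y):
    T, dT = y
    # Energy balance: k*A*T'' = h*P*(T - T_inf) + eps*sigma*P*(T^4 - T_inf^4)
    d2T = (h * P * (T - T_inf) + eps * sigma * P * (T**4 - T_inf**4)) / (k * A)
    return np.vstack([dT, d2T])

def bc(ya, yb):
    # Prescribed base temperature and insulated (adiabatic) tip
    return np.array([ya[0] - T_base, yb[1]])

x = np.linspace(0.0, L, 51)
y_guess = np.vstack([np.full_like(x, T_base), np.zeros_like(x)])
sol = solve_bvp(fin_ode, bc, x, y_guess)

q_fin = -k * A * sol.sol(0.0)[1]          # heat entering the fin base = heat dissipated by one fin
print(f"tip temperature: {sol.sol(L)[0]:.1f} K, heat per fin: {q_fin:.2f} W")
```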
Results and discussion
The temperature distribution in the fin and the variation of the local heat transfer coefficient for different values of the radiation parameter N_R and the temperature ratio term ψ are shown in Figs. 3 and 4. Evidently, with an increase in N_R, heat dissipation is higher, and consequently lower wall temperatures and higher values of the heat transfer coefficient can be expected. In Fig. 4 the influence of emissivity on the local heat transfer coefficient is shown. The influence of ε is pronounced at higher values of ψ. An increase in the value of ψ implies a lower temperature difference between the wall and the ambient and a consequently higher value of the heat transfer coefficient for a given input Q. The influence of the convection parameter N_C on the average heat transfer coefficient can be observed from Fig. 5. An increase in the value of N_C implies greater heat loss by convection, which is evident from the graph of the average heat transfer coefficient against N_R shown in Fig. 5. The estimated value of heat flow from the fin array at different operating temperatures is shown in Fig. 6 along with the experimental data. The experimental values of Q are compared with values from theory as shown in Fig. 7. Equations (3) to (6) and the data of Guvenc and Yuncu [10] presented by Yuncu and Kakac [11] are in good agreement with the numerical results, as shown in Fig. 8, thus confirming the applicability of the single-fin analysis with radiation in the design of a fin array system. The ratio of the ambient temperature to the excess temperature (difference between base and ambient), represented by the temperature ratio term ψ, is observed to be a significant parameter along with the radiation term N_R in the evaluation of the heat transfer coefficient. c) A value of 0.7 for the emissivity of the black chrome surface in the temperature range of 60-110 °C is found to correlate well with other authors and the experimental data obtained. d) The analysis of a single fin with the inclusion of radiation can be used to estimate the heat flow from a fin array.
Figure 1. Schematic diagram of the experimental setup.
Figure 3. Local temperature variation for different values of N_R and ψ.
Figure 4. Variation of the local overall heat transfer coefficient for different values of N_R and ψ.
Figure 7. Comparison of experimental data with values estimated from theory.
Figure 8. Comparison of experimental Nusselt numbers with correlations from the literature.
It can be observed that the estimated values of heat flow increase with temperature, in good agreement with the experimental data.
|
2018-12-08T14:57:03.987Z
|
2014-01-01T00:00:00.000
|
{
"year": 2014,
"sha1": "da4348b6d444b614d1c234c1f16bac96ebe50c98",
"oa_license": "CCBY",
"oa_url": "https://www.matec-conferences.org/articles/matecconf/pdf/2014/04/matecconf_icper2014_02018.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "da4348b6d444b614d1c234c1f16bac96ebe50c98",
"s2fieldsofstudy": [
"Engineering",
"Physics",
"Materials Science"
],
"extfieldsofstudy": [
"Chemistry"
]
}
|
233270312
|
pes2o/s2orc
|
v3-fos-license
|
Prevalence of overweight and obesity among adolescents in South Indian population
Gayathri D. 1*, Syamily 2, Kulandaivel M. 3
DOI: https://doi.org/10.17511/ijmrr.2020.i06.05
1* Gayathri D., Assistant Professor, Department of Pediatrics, Sri Venkateshwaraa Medical College and Research center, Ariyur, Pondicherry, India.
2 Syamily, Post-Graduate, Department of Pediatrics, Sri Venkateshwaraa Medical College and Research center, Ariyur, Pondicherry, India.
3 Kulandaivel M., Professor, Department of Pediatrics, Sri Venkateshwaraa Medical College and Research center, Ariyur, Pondicherry, India.
Introduction
Childhood obesity is one of the global public health challenges of the 21st century, affecting every country in the world. Globally, in just 40 years the prevalence of obesity has risen more than 10-fold, from 11 million to 124 million school-age children and adolescents according to a 2016 estimate [1]. With modernization, overweight/obesity is also rapidly growing in many developing countries. The world today faces a double burden of malnutrition, which includes both undernutrition and overweight. Because of the difficulty of treating obesity in adults and the many long-term consequences of childhood obesity, prevention of childhood obesity has now been recognized as a public health priority [5]. In India, the emergence of childhood obesity presents a cause for concern because of recent changes in lifestyle and economic development [6].
Obesity is now emerging as a common nutritional disorder. It usually results when food consumption is more than one's physiological needs [7]. National Nutrition Monitoring Bureau (NNMB) data observed high obesity levels in urban slums indicating that obesity is now affecting the urban poor also [8].
Complications of adult obesity are made worse if obesity begins in childhood [9].
In this study, the prevalence of overweight and obesity in 11-14 years of school children in a private school in Urban Pondicherry is estimated using BMI, Waist circumference (WC), and Waist Height Ratio (WHtR).
Methodology
Study setting: This study is a school-based cross-sectional study done among 11-14-year-old school children from a selected private school in urban Pondicherry.
Duration:
The study was carried out for 6 months, from June 2019 to December 2019. The study population was selected randomly, and the sample size was calculated using Open Epi Version 3.01 (formula Z²pq/d²), where "p" was taken as the maximum of 13.04%, with an absolute precision of 5%, a 95% confidence interval, and an alpha error of 5% [10].
Inclusion criteria:
Exclusion criteria:
Study procedure: The sample population was selected from one private school in Pondicherry.
Students of age group 11-14 years were included in this study. The study was conducted after getting permission from school management through the proper channel. The study was carried out during break time after getting consent from participants.
A proforma was used for collecting the requisite information from the students. The participants were evenly distributed from 11 to 14 years, with an almost equal number of the study population at each age. Girls dominated the study population with 55.3%, and boys made up 44.7%. The strength of students was evenly distributed across ages, and this was reflected in the classes of study from the 6th to the 9th standard. It was found that more students (54.0%) had both indoor and outdoor activity. Less than 50% of the students were involved in household activities, even though girls dominated the study population. Regarding the time spent on TV and mobile devices, more than 70% of the students spent 1-2 hours. In the present study, 94% of the students were non-vegetarians and only 6% were vegetarians. The convention of three meals a day was dominant, at 86%, with only 14% taking more than three meals. More than 70% of the students dined outside once or twice a week, and only 19.3% of the students did not dine outside at all. Only 44% of the students were found to eat unhealthy food outside, while 56% did not eat unhealthy food. The number of students who were overweight was 27 (18%), obese 9 (6%), and with normal BMI 114 (76%). Based on waist circumference, 22.7% were obese, and based on the waist/height ratio, 18.7% were obese.
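For reference, the two simpler indices used here are computed directly from anthropometric measurements; the sketch below shows the arithmetic. The WHtR screening cutoff of 0.5 mentioned in the comment is a commonly used threshold and is an assumption for illustration, since the study's exact age- and sex-specific cutoffs are not restated in this excerpt.

```python
def bmi(weight_kg, height_m):
    """Body mass index = weight / height^2 (kg/m^2)."""
    return weight_kg / height_m**2

def waist_height_ratio(waist_cm, height_cm):
    """Waist-to-height ratio; > 0.5 is a commonly used screening cutoff (assumed here)."""
    return waist_cm / height_cm

# Illustrative 13-year-old: 52 kg, 1.52 m tall, 78 cm waist
print(round(bmi(52, 1.52), 1))                  # 22.5 kg/m^2
print(round(waist_height_ratio(78, 152), 2))    # 0.51 -> above the assumed 0.5 screening cutoff
```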
Discussion
The results of the present study, conducted in 11- to 14-year-old school children in a private school of urban Pondicherry to estimate the prevalence of overweight and obesity, show that the prevalence of overweight is 18% (N=27) and of obesity 6% (N=9) based on BMI. The prevalence of obesity in the present study is on par with the study conducted by Vishnu Prasad et al. [11] in 2015 in Pondicherry, where obesity was 4.3%, whereas that study's prevalence of overweight (9.7%) was lower than in the present study. Similarly, a study conducted in Kerala showed that the prevalence was 3% for boys and 5.3% for girls; the prevalences of obesity (7.5%) and overweight (21.9%) were highest among the high-income group and lowest (1.5% and 2.5%) among the low-income group [12]. The present study also shows a higher prevalence rate of overweight/obesity among girls, as did a previous study done in Chennai [15]. In this study, overweight/obesity is seen to increase slightly with age and level of study and is more common in boys than girls. From this study, overweight/obesity is seen commonly in students who spend more time in screen viewing, such as watching television/mobile phones/computers, in students who play indoor games, in students who do not spend time on household activities, in students who eat non-vegetarian food, in students who dine out more often, in students who consume meals frequently, and in students who eat unhealthy snacks (pizza, burger, ice cream, shawarma) and aerated drinks. These were found to be the risk factors predisposing to overweight/obesity in school children.
Limitations
The study population was small, and only a private school was selected, which may introduce bias in the nutritional and socioeconomic status of the study population.
What does the study add to the existing knowledge?
The present study highlights the increasing incidence of overweight and obesity among children, which will have a greater impact at a later age and will add to the burden on the health care system through the increasing incidence of non-communicable diseases. Changing lifestyles also contribute and need to be modified for a healthier society.
Author's contribution
|
2021-01-07T09:07:32.797Z
|
2020-12-23T00:00:00.000
|
{
"year": 2020,
"sha1": "2b7b11b62f4bd44793671804fe65c4e384a9b32e",
"oa_license": "CCBY",
"oa_url": "https://ijmrr.medresearch.in/index.php/ijmrr/article/download/1226/2234",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "92906e9f454feae9e5745e55ab46ebff82ee8b10",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
}
|
85517966
|
pes2o/s2orc
|
v3-fos-license
|
On Neutron Star Mergers as the Source of r-process Enhanced Metal Poor Stars in the Milky Way
We model the history of Galactic r-process enrichment using high-redshift, high-resolution zoom cosmological simulations of a Milky Way (MW) type halo. We assume that all r-process sources are neutron star mergers (NSMs) with a power law delay time distribution. We model the time to mix pollutants at subgrid scales, which allows us to better compute the properties of metal poor (MP) and carbon enhanced metal poor (CEMP) stars, along with statistics of their r-process enhanced subclasses. Our simulations underpredict the cumulative ratios of r-process enhanced MP and CEMP stars (MP-r, CEMP-r) over MP and CEMP stars by about one order of magnitude, even when the minimum coalescence time of the double neutron stars ($t_{\rm min}$) is set to 1 Myr. No r-process enhanced stars form if $t_{\rm min}=100$ Myr. Our results show that even when we adopt the r-process yield estimates observed in GW170817, NSMs by themselves can only explain the observed frequency of r-process enhanced stars if either the birth rate of double neutron stars per unit mass of stars is boosted to $\approx10^{-4} M_\odot^{-1}$ or the Europium yield of each NSM event is boosted to $\approx 10^{-4} M_{\odot}$.
INTRODUCTION
The recent aLIGO/aVirgo detection of gravitational waves from the merger of two neutron stars (GW170817; Abbott et al. 2017a), and the subsequent kilonova observed across the entire electromagnetic spectrum (Abbott et al. 2017b;Coulter et al. 2017) have confirmed that r -process elements are made in copious amounts in neutron star mergers (NSMs; Abbott et al. 2017c;Kasen et al. 2017). This discovery could be the sine qua non for showing that NSMs are the primary source of r -process elements in the Milky Way (Côté et al. 2018b).
On the other hand, while it is clear that NSMs are one of the sources of r-process enrichment, it remains an open question whether they are the most important source. To address this question, several theoretical studies have modeled r-process enrichment of a Milky Way (MW) type halo and its ultra faint dwarf (UFD) satellites by NSMs. van de Voort et al. (2015a) carried out a zoom simulation of a MW type halo to z = 0 and concluded that NSM events can explain the observed [r-process/Fe] abundance ratios assuming 10^-2 M_sun of r-process mass is ejected into the ISM in each NSM event. Shen et al. (2015) studied the sites of r-process production by post-processing the "Eris" zoom simulations, and found that r-process elements can be incorporated into stars at very early times, a result that is insensitive to modest variations in the delay distribution and merger rates. Separately, Safarzadeh & Scannapieco (2017) studied r-process enrichment in the context of UFDs and concluded that natal kicks can affect the r-process enhancement of subsequent stellar generations.
In each of these studies, it is observations of metal poor (MP) and carbon enhanced metal poor (CEMP) stars that are most constraining. Such stars encode a wealth of information about the formation of the first stars in the universe (Beers & Christlieb 2005;Frebel & Norris 2015), and similarly their r -process enhanced subclasses (MP-r and CEMP-r ), provide insight into the earliest r -process sources. Therefore, a successful theory for the source of the r -process should be able to explain the observed statistics of MP-r and CEMP-r stars in the MW's halo (Barklem et al. 2005;Abate et al. 2016).
In fact, the very existence of CEMP-r stars poses new challenges for the origin of r -process elements in the early universe. These stars are believed to form at high redshifts and in low mass halos where Population III (Pop. III) stars have polluted the halo with their carbon rich ejecta. In such low mass halos, for a CEMP-r star to form, an r -process source that acts on a timescale similar to Pop. III stars (i.e., ≈10 Myr) is needed .
Could the observed statistics of different classes of rprocess enhanced stars be explained by NSMs as the sole source of r -process in the early universe? In this study, we address this question, by carrying out a set of zoom cosmological simulations of a MW type halo and modeling NSMs as the sources of the r -process material. We improve on crucial aspects of previous such simulations on three fronts: (i) Modeling the coalescence timescales of double neutron stars (DNSs) as drawn from distributions motivated by population synthesis analyses (Fryer et al. 1998;Dominik et al. 2012;Behroozi et al. 2014). (ii) Identifying Pop. III stars by following the evolution of pristine gas in each simulation cell with a subgrid model of turbulent mixing that is crucial for properly identifying Pop. III stars whose ejecta are the precursor to the formation of CEMP stars (Sarmento et al. 2017;Naiman et al. 2018); (iii) Adopting a high dark matter particle mass resolution in order to resolve halos where the MP and CEMP stars form in the early universe.
The structure of this work is as follows: In §2 we describe our method in detail. In §3 we present our results and compare them to observations of MW halo stars. In §4 we discuss our results and conclusions. Throughout this paper, we adopt the Planck 2015 cosmological parameters (Planck Collaboration et al. 2016), where Ω_M = 0.308, Ω_Λ = 0.692, and Ω_b = 0.048 are the total matter, vacuum, and baryonic densities in units of the critical density ρ_c, h = 0.678 is the Hubble constant in units of 100 km/s/Mpc, σ_8 = 0.82 is the variance of linear fluctuations on the 8 h^-1 Mpc scale, n_s = 0.968 is the tilt of the primordial power spectrum, and Y_He = 0.24 is the primordial helium fraction.
METHOD
We used ramses (Teyssier 2002), a cosmological adaptive mesh refinement (AMR) code, which implements an unsplit second-order Godunov scheme for evolving the Euler equations. ramses variables are cell-centered and interpolated to the cell faces for flux calculations; these are then used by a Harten-Lax-van Leer-Contact Riemann solver (Toro et al. 1994).
We performed a set of zoom cosmological simulations of a MW-type halo in order to address whether NSMs can be considered the primary source of r-process enrichment in the early universe. We adopted three different minimum timescales for the coalescence of the DNSs: t_min = 1, 10, and 100 Myr. We also adopted three different energies for the NS merger event and ran simulations with E_NSM = 10^50, 10^51, and 10^52 erg. In all cases, we stopped the simulations at z ≈ 8-9, when reionization is complete and the formation of metal poor stars largely diminishes. The statistics of the different classes of stars displaying a high abundance of r-process elements are then compared against the MW's halo stars.
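To make the role of t_min concrete, the sketch below draws NSM coalescence delay times from a power-law delay time distribution with a hard minimum delay. The slope of -1 and the maximum delay of a Hubble time are assumptions chosen for illustration; the paper only states that the delay time distribution is a power law.

```python
import numpy as np

def sample_dtd_delays(n, t_min_myr, t_max_myr=13.8e3, slope=-1.0, rng=None):
    """Draw NSM coalescence delay times from dN/dt ~ t**slope, t_min <= t <= t_max.
    Uses inverse-transform sampling; slope = -1 is handled as a special case."""
    rng = np.random.default_rng(rng)
    u = rng.random(n)
    if np.isclose(slope, -1.0):
        # For dN/dt ~ 1/t the CDF is logarithmic between t_min and t_max
        return t_min_myr * (t_max_myr / t_min_myr) ** u
    a = slope + 1.0
    return (t_min_myr**a + u * (t_max_myr**a - t_min_myr**a)) ** (1.0 / a)

delays = sample_dtd_delays(100000, t_min_myr=1.0)
print(np.median(delays), (delays < 100).mean())  # median delay (Myr) and fraction merging within 100 Myr
```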
Simulation setup and Milky Way initial conditions
To initialize our simulations, we first ran a dark-matter-only simulation down to redshift zero in a periodic box with a comoving size of 50 Mpc h^-1. Initial conditions (ICs) were generated with music (Hahn & Abel 2011) for a Planck 2015 cosmology. The virial masses and radii of the halos are derived from the HOP halo finder (Eisenstein & Hut 1998). We used a halo mass cut of 1-2 × 10^12 M_sun to ensure we only identified halos with a mass similar to the MW. We found 275 such halos within the desired mass range in our simulation box. We further refined our MW-type halo candidates by requiring them to be isolated systems. We estimated this based on the tidal isolation parameter (τ_iso) approach (Grand et al. 2017), in which the isolation parameter for each halo is computed from M_200 and R_200, the virial mass and radius of the halo of interest, and M_200,i and r_i, the virial mass of and distance to the i-th halo in the simulation, respectively. We computed τ_iso,max for all halos with masses between 1-2 × 10^12 M_sun by searching within a distance of 10 Mpc h^-1 centered on the location of each halo. The most isolated halos, i.e., those with the lowest values of τ_iso,max, are our candidate MW-like halos. Next, we traced the dark matter (DM) particles within 2 × R_200, for the top five candidates with the lowest values of τ_iso,max, back to the starting redshift. The locations of these DM particles determine the Lagrangian enclosing box. The halo with the smallest box, now our zoom region, was chosen for our simulations to reduce the computational costs.
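The exact expression for τ_iso from Grand et al. (2017) is not reproduced in the text, so the sketch below assumes the usual tidal-field scaling, τ_iso,i = (M_200,i/M_200)·(R_200/r_i)^3, maximised over all neighbours within the search radius; the exponent, the normalisation, and the numerical values in the example are assumptions made for illustration.

```python
import numpy as np

def tidal_isolation(M_host, R_host, M_neighbors, r_neighbors):
    """Maximum tidal-isolation parameter over all neighbouring halos.
    Assumes tau_iso,i = (M_i / M_host) * (R_host / r_i)**3 (tidal-field scaling)."""
    M_i = np.asarray(M_neighbors, dtype=float)
    r_i = np.asarray(r_neighbors, dtype=float)
    tau = (M_i / M_host) * (R_host / r_i) ** 3
    return tau.max()

# Illustrative values: a 1.5e12 Msun host of radius 0.25 Mpc/h with two neighbours at 2 and 5 Mpc/h
print(tidal_isolation(1.5e12, 0.25, [8e11, 3e12], [2.0, 5.0]))
```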
For the full hydrodynamic simulations, this zoom region is refined to base levels of 12 and 13 for two different sets of simulations, corresponding to dark matter particle masses of m DM ≈ 1.2 × 10 5 M and 1.4 × 10 4 M , respectively. The zoom region has sides of 4.4 × 4.2 × 6.4 comoving Mpc h −1 .
Star formation and feedback
The stellar particle mass in the simulation is m * = N ρ th ∆x 3 min , where ∆x min is the best resolution cell size achievable, N is drawn from a Poisson distribution, and the star formation efficiency ε * was set to 0.01 (Krumholz & Tan 2007) in our simulations. Setting L max , the maximum refinement level in the simulation, to 24, together with n * = 17 H/cm 3 as the threshold for star formation in the cells, results in a stellar particle mass of ≈ 50 M . This is massive enough to host the two supernovae needed to create a double neutron star. A further limitation on star particle formation is that no more than 90% of the cell's gas mass can be converted into stars.
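To make the stochastic prescription above concrete, here is a minimal sketch (cgs units, illustrative names) of drawing the stellar mass formed in a cell during one timestep. Only ε * = 0.01, the density threshold, and the ≈ 50 M mass quantum are specified in the text; the Schmidt-law form of the Poisson mean below is our assumption.

```python
import numpy as np

G = 6.674e-8  # gravitational constant, cgs

def sample_star_mass(rho, dx, dt, rho_th, eps_star=0.01, rng=None):
    """Stellar mass formed in one cell over one timestep (illustrative sketch).

    Assumes a Schmidt-law Poisson mean lam = eps_star * (rho*dx^3/m_min) * (dt/t_ff),
    with m_min = rho_th * dx^3 the mass quantum (about 50 Msun in the text).
    """
    if rho < rho_th:
        return 0.0                                    # below the n_* threshold
    rng = rng or np.random.default_rng()
    m_min = rho_th * dx**3                            # mass of one star particle
    t_ff = np.sqrt(3.0 * np.pi / (32.0 * G * rho))    # local free-fall time
    lam = eps_star * (rho * dx**3 / m_min) * (dt / t_ff)
    n = rng.poisson(lam)
    # At most 90% of the cell gas mass may be converted into stars.
    return min(n * m_min, 0.9 * rho * dx**3)
```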
In this study, we modeled r-process element production by NSMs only; slow s-process channels were not modeled. Consequently, we did not model elements such as barium that have both r-process and s-process origins. We also did not model SNe Ia because of their long average delay times, of the order of 200-500 Myr (Raskin et al. 2009). Given the stellar particle mass (≈ 50 M ), 50% of all such particles were assumed to host one core-collapse supernova (CCSN), assigned stochastically. Therefore, half of the stellar particles generated a CCSN ejecting a total mass of m sn = 10 M with a kinetic energy of E SN = 10 51 erg, 10 Myr after the star was formed. The metallicity yield for each CCSN is set to η SN = 0.1, meaning one solar mass of metals is ejected in each CCSN event.
For each newly formed star particle, the ejected mass and energy were deposited into all cells whose centers are within 20 pc of the particle; if the size of the cell containing the particle is greater than 20 pc, the energy and ejecta are deposited into the adjacent cells (Dubois & Teyssier 2008). Here the total mass of the ejecta is that of the stellar material plus an amount of the gas within the cell hosting the star particle (entrained gas), such that m ej = m sn + m ent , and m ent ≡ min(10 m sn , 0.25 ρ cell ∆x 3 ). Similarly, the mass in metals added to the simulation is taken to be 15% of the SN ejecta plus the metals in the entrained material, Z ej m ej = m ent Z + 0.15 m sn .
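The two relations above translate directly into a small helper function. This is only an illustrative rendering of the quoted formulas; the names are ours, and any consistent unit system works.

```python
def sn_ejecta(m_sn, rho_cell, dx, Z_cell):
    """Ejecta mass and metallicity for one CCSN, per the prescriptions above."""
    m_ent = min(10.0 * m_sn, 0.25 * rho_cell * dx**3)  # entrained gas from the host cell
    m_ej = m_sn + m_ent                                # stellar ejecta plus entrained gas
    Z_ej = (m_ent * Z_cell + 0.15 * m_sn) / m_ej       # 15% of the SN ejecta is metals
    return m_ej, Z_ej
```

The primordial-metal scalar described in the next paragraph is updated with the same structure, with the 0.15 m sn term weighted by the Pop. III mass fraction P of the particle.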
We separately tracked the metals generated by Pop. III stars. These are dubbed 'primordial metals' and their mass is taken to be Z P,ej m ej = m ent Z P + 0.15 m sn P, since the scalar P captures the mass fraction of the star particle that represents Pop. III stars. SN feedback is the dominant driver of turbulence in our simulation, and we have modeled the feedback to be purely in kinetic form. Lastly, we note that we do not model black hole formation and its feedback because its impact is expected to be negligible at this redshift (Scannapieco & Oh 2004; Scannapieco et al. 2005; Croton et al. 2006; Sijacki et al. 2007).

Cooling

We used CLOUDY (Ferland et al. 1998) to model cooling at temperatures above 10 4 K. Below this temperature we used Rosen & Bregman (1995) and allowed the gas to cool radiatively to 100 K. However, adiabatic cooling can result in gas falling below this temperature.
Additionally, we supplemented the cooling in the primordial gas with an H 2 cooling model based on Martin et al. (1996). We computed the cooling rate for each simulation cell based on its density, temperature, and H 2 fraction, f H2 . We set the primordial H 2 fraction according to Reed et al. (2005) with f H2 = 10 −6 .
Although we did not explicitly model radiative transfer, we modeled the Lyman-Werner flux from our star particles, since these photons destroy H 2 . We used η LW = 10 4 photons per stellar baryon (Greif & Bromm 2006) and assumed optically thin gas throughout the simulation volume. The total number of stellar baryons, N *,b , was computed each step by totaling the mass in star particles, assuming a near-primordial composition (X = 0.73, Y = 0.25). The value of f H2 was then updated accordingly every simulation step. We did not model the formation of H 2 , since subsequent cooling is dominated by metals shortly after the first stars are formed. Lastly, we included a UV background based on the Haardt & Madau (1996) model.
Turbulent mixing
We made use of the work described in Sarmento et al. (2017) to generate and track new metallicity-related quantities for both the gas and star particles. Specifically, for each cell in the simulation we tracked the average primordial metallicity, Z P , which tracks the mass fraction of metals generated by Pop. III stars, and the pristine gas mass fraction, P , which models the fraction of unpolluted gas within each simulation cell with Z < Z crit . We briefly describe these scalars here, and a more thorough discussion is presented in Sarmento et al. (2017).
The primordial metallicity scalar, Z P , tracked the metallicity arising from Pop. III stars. This scalar allowed us to track the fraction of Pop. III SN ejecta in subsequent stellar populations. Yields from Pop. III stars are likely to have non-solar elemental abundance ratios (Heger & Woosley 2002; Umeda & Nomoto 2003; Ishigaki et al. 2014) and contribute to the unusual abundance patterns seen in the halo and UFD CEMP stars. Knowing both Z P and the overall metallicity of the gas, Z, allowed us to estimate the abundances of various elements without having to track each one individually.

Table 1. The mass fractions of metals for selected elements used to model the normal and primordial metallicity of star particles in our simulation. Data for gas typical of 1 Gyr post Big Bang provided by F. X. Timmes (2016). Data for a 60 M Pop. III SN provided by Heger (2016).
Similarly, the elemental abundance pattern for regular metals is accounted for by a single scalar, Z. By tracking these values for each star particle in the simulations, and convolving them in post-processing, we can explore the composition of our star particles through cosmic time, using a variety of yield models for both Pop. III and Pop. II SNe. Our pristine mass fraction scalar, P , modeled the mass fraction of gas with Z < Z crit in each simulation cell. Star formation takes place at much smaller scales than the best resolution of typical cosmological simulations. Modeling P allowed us to follow the process of metal mixing at subgrid scales by quantifying the amount of pristine gas within each cell as a function of time.
Most simulations instantaneously update cells' average metallicity once they are contaminated with SN ejecta. However, mixing pollutants typically takes several eddy turnover times (Pan & Scannapieco 2010; Pan et al. 2013; Ritter et al. 2015). By tracking the evolution of P , we can model the formation of Pop. III stars in areas of the simulation that would normally be considered polluted above Z crit , in effect increasing the chemical resolution of the simulation. Our model for the pristine fraction is based on accepted theoretical models (Pan & Scannapieco 2010) and has been calibrated against numerical simulations that model the dynamical time required to mix pollutants, due to SN stirring, in an astrophysical context (Pan et al. 2013).
As stellar particles are formed within a cell, they inherit Z, P, and Z P from the gas. This allowed us to calculate the fraction of stellar mass in a given star particle that represents metal-free stars, P * , as well as the relative contributions that metals from Pop. III and Pop. II stars make to the stars that are enriched, Z P,* /Z * .
The ejecta compositions for Pop. II and Pop. III stars are indicated in Table 1. Properly accounting for turbulent mixing enables us to identify the Pop. III stars whose stellar yields (carbon-rich ejecta) differ from those of Pop. II stars and are responsible for the formation of CEMP stars. We express the abundance ratio of a star relative to that of the Sun as [X/Y] = log(N X /N Y ) * − log(N X /N Y ) ⊙ . The solar abundance of Eu (log ε Eu ) is assumed to be 0.52 (Asplund et al. 2009) in the notation of log ε X = log(N X /N H ) + 12, where N X and N H are the number densities of element X and hydrogen, respectively. Likewise, for carbon we adopt log ε C = 8.43 and for iron log ε Fe = 7.5. We note that subgrid turbulent mixing is only modeled for the metals and not the r-process ejecta. However, due to the high resolution of these simulations, we observe a negligible difference in metal enrichment due to the computation of subgrid turbulent mixing. Therefore, we assume the same holds for r-process material, as it is treated as another scalar field similar to the metals in the code.
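For concreteness, the short sketch below evaluates these bracket ratios from log ε values using the solar references quoted above. It is a post-processing convenience with illustrative inputs, not code from the simulation itself.

```python
# Solar reference abundances quoted in the text, log_eps_X = log10(N_X/N_H) + 12.
LOG_EPS_SUN = {"Eu": 0.52, "C": 8.43, "Fe": 7.50, "H": 12.00}

def bracket(x, y, log_eps_star):
    """[X/Y] = log10(N_X/N_Y)_star - log10(N_X/N_Y)_sun."""
    return (log_eps_star[x] - log_eps_star[y]) - (LOG_EPS_SUN[x] - LOG_EPS_SUN[y])

# Example star (made-up log_eps values):
star = {"Fe": 5.0, "Eu": -1.0, "C": 6.5, "H": 12.0}
print(bracket("Fe", "H", star))   # [Fe/H] = -2.5
print(bracket("Eu", "Fe", star))  # [Eu/Fe] = 0.98
print(bracket("C", "Fe", star))   # [C/Fe]  = 0.57
```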
Modeling neutron star mergers

We have modeled the formation of DNSs to take place for a small fraction (10 −3 ) of the stellar particles chosen to host SNe. This corresponds to one DNS per 10 5 M of stars, which translates into a neutron star merger rate of ≈ 10 −4 /year at z = 0 (van de Voort et al. 2015b).
The particle chosen to host a DNS first undergoes two CCSN explosions, corresponding to the two progenitor stars. Afterwards, the particle was assigned a delay time drawn from a power-law distribution, t merge ∝ t −1 (e.g. Dominik et al. 2012; Mennekens & Vanbeveren 2016), with a minimum of t min = 1, 10, or 100 Myr (for three separate simulations) and a maximum of t max = 10 Gyr. Note that this time is measured after the formation of the second neutron star in the binary. Once the merger time has elapsed, we simulated the generation of r-process elements via a third explosion with E NSM = 10 51 erg in our fiducial run, while we explored the E NSM = 10 50 and 10 52 erg cases separately.
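Sampling from this truncated t −1 delay-time distribution has a closed-form inverse CDF; the sketch below (illustrative names, times in Myr) also makes the median delay times quoted later in the text easy to verify.

```python
import numpy as np

def sample_merger_delay(n, t_min=1.0, t_max=1.0e4, rng=None):
    """Draw n delay times (Myr) from p(t) ~ 1/t on [t_min, t_max]
    via the inverse CDF t = t_min * (t_max / t_min)**u, u uniform in [0, 1)."""
    rng = rng or np.random.default_rng()
    u = rng.uniform(size=n)
    return t_min * (t_max / t_min) ** u

# The median is sqrt(t_min * t_max): ~100 Myr for t_min = 1 Myr and
# t_max = 10 Gyr, and ~300 Myr for t_min = 10 Myr.
print(np.median(sample_merger_delay(100_000)))
```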
Europium yield
We set the fiducial value of the europium yield in the NSM events in our simulations based on the NS-NS merger detected by aLIGO/Virgo (GW170817). We adopted the estimated Eu yield of 1.5 × 10 −5 M for each NS merger event in our simulation. This number reflects the lanthanide-rich material ejected in the post-merger accretion disk outflow of a NS-NS merger event, with a maximum value of 0.04 M (Cowperthwaite et al. 2017), multiplied by the abundance pattern of the solar r-process residuals (Côté et al. 2018b). The disk wind ejecta could be lanthanide-rich depending on the lifetime of the hyper-massive neutron star prior to collapsing into a black hole (Metzger & Fernández 2014; Siegel & Metzger 2017). We adopted this value because, in order to answer the question of whether NSMs could by themselves explain the statistics of the r-process enhanced stars in the MW's halo, one needs to be conservative in the assigned yields.
Simulation parametrizations
We carried out five different simulations in this paper. We name the simulations TxEy, where x stands for the minimum time for coalescence of the NSMs, and y stands for the energy of the NSM event in cgs units. For example, T10E51 stands for the simulation with the minimum time for merging of the NSMs set to 10 Myr and E NSM = 10 51 erg. The dark matter particle mass resolution is m p ≈ 1.2 × 10 5 M , and our stellar particle mass is fixed to be 50 M . We stopped the simulation at z ≈ 8. All five simulations are summarized in Table 2.
RESULTS
Table 2. The characteristics of the simulations presented in this paper. We adopt the notation TxEy to name each simulation, where x stands for the minimum time for coalescence of the NSMs and y stands for the energy of the NSM event in cgs units. The simulation with a minimum merging time of 1 Myr and E NSM = 10 51 erg is named T1E51; the simulation with a minimum merging time of 100 Myr is named T100E51. All three of these simulations have a dark matter particle mass of 1.2 × 10 5 M . The first column indicates the minimum timescale for merging of the DNSs in the power-law distribution, the second column gives the energy of the NSM event, and the last column lists the stopping redshift of the simulation.

We start by showing the overall star formation history of our MW type galaxy and its corresponding metallicity evolution. The top panel of Figure 1 shows the comoving star formation rate density (SFRD) of the T1E51 simulation, which we ran down to redshift z = 8.2. The cyclic SFR trend, with an overall increase towards lower redshift, is characteristic of all our simulations, while the exact level of the SFR can vary depending on the overdensity which is re-simulated at higher resolution (Xu et al. 2016). The improved DM mass resolution in the Renaissance simulation allows it to track star formation in smaller overdensities at earlier times; hence we see a higher SFRD at early times for the normal case as compared to simulations with lower DM mass resolution. The Renaissance simulation has a comoving resolution of 19 pc compared to our resolution of 5 pc, but their DM particle mass is 2.9 × 10 4 M compared to our 1.2 × 10 5 M . The bottom panel of Figure 1 shows the metallicity distribution function (MDF) for stars grouped based on their formation redshift. The MDF for stars formed at z > 14 is shown in blue and that for stars formed at z > 8.2 is shown in black. As expected, the overall metallicity increases with time, while the rate of change of the MDF slows down towards lower redshifts. These are all the stars in the simulation, not categorized per halo mass. Figure 2 shows rendered images of the dark matter, hydrogen, r-process material, and metals in the T10E51 simulation at z ≈ 9. The fact that DNSs are born with a distribution of delay times causes some halos to be enriched with metals but no r-process material. We note that modeling DNS natal kicks will make this feature more pronounced, as we present in an upcoming work.
Formation of CEMP stars
Modern surveys of the Galactic halo, as well as UFDs, indicate that CEMP stars (defined as those with [C/Fe] > 1 and [Fe/H] < −1) become more prevalent as overall metallicity decreases (Beers & Christlieb 2005). In fact, these surveys indicate that the fraction of CEMP stars is as high as 25% for stars with [Fe/H] < −2.0 (Komiya et al. 2007) and possibly as high as 40% for stars with [Fe/H] < −3.5 (Lucatello et al. 2006). Hansen et al. (2016) found that only about 17% ± 9% of all the CEMP-no stars (those that display no enhancement in s- or r-process elements) exhibit binary orbits. Therefore, the dominant formation scenario of the CEMP stars is not through mass transfer from a binary companion. Moreover, the discovery of damped Ly-α systems with enhanced carbon (Cooke et al. 2011, 2012) suggests that these stars are born in halos that are pre-enriched by carbon (Sharma et al. 2018).
The left panel of Figure 3 shows the distribution of the stars in the [C/Fe] − [Fe/H] plane. Each point is a star particle, color coded by its age (i.e., red shows the stars that formed at the highest redshift in the simulation). The adopted Fe and C yields from Pop. II and Pop. III SNe are listed in Table 1. Each star formation event traces a line with a negative slope in this plane. The oldest stars trace a line with a more negative slope compared to the younger stars formed in the simulation. Since carbon is primarily generated by Pop. III stars, and Pop. III stars are formed in metal poor regions, we naturally see higher carbon enrichment towards lower metallicities. This is consistent with the observations of CEMP stars, where a higher percentage of stars show [C/Fe] > 1 towards lower metallicities. The location of the stars in the [C/Fe] − [Fe/H] plane that defines a CEMP star is outlined with a dashed blue line.
The right panel of Figure 3 shows the cumulative fraction of the MP stars that are CEMPs as a function of redshift. The black star indicates the observed cumulative ratio of ≈ 5% (Lee et al. 2013), which is based on the SDSS/SEGUE data and consistent with other groups (Frebel et al. 2006; Carollo et al. 2011; Placco et al. 2014). The orange hexagon is the updated analysis from Yoon et al. (2018). We note that in this plot we have adopted [C/Fe] > 0.7 as the definition of a CEMP star, to be consistent with the statistics presented in Lee et al. (2013) and Yoon et al. (2018). The cumulative ratio of the CEMP stars to all the MP stars drops with redshift and reaches the observed ratio around z ∼ 8.

Formation of metal poor r-process stars

The middle panel of Figure 4 shows the distribution of the stellar particles in the [C/Fe] − [Eu/Fe] plane. As can be seen, the lines have a positive slope, indicating that those stars that are carbon enriched, and therefore born in halos enriched with Pop. III ejecta, also show higher [Eu/Fe] values. This is because Pop. III star formation results both in supernovae that eject large amounts of carbon into their surroundings and in DNSs that are strong sources of europium. This leads to the observed correlation for old stars. As can be seen, older stars are clustered towards the lower end of [C/Fe] and do not show the strong correlation between [Eu/Fe] and [C/Fe] that is seen for the young stars. This is due to the fact that metal production dominates over that of carbon in more massive halos, and in general, as the formation of Pop. III stars ceases, the new stars in the halo are born with lower [C/Fe]. In such systems, a single NSM event will lead to a large dispersion along the [Eu/Fe] axis, as is observed by how the old stellar particles are clustered towards the lower end of [C/Fe].

In the middle panel, we also show the 5 stars in the ultra-faint dwarf galaxy Reticulum II that have measured abundances in both carbon and europium. The fact that there are practically no stars in our simulation that match the Ret II abundances in both of these elements potentially shows that the europium yield or NSM merger rate adopted as a fiducial value in our simulations needs to be boosted by a large factor. We return to this point in the next section.
The right panel of Figure 4 shows the distribution of the stellar particles in the [Fe/H] − [Eu/Fe] plane. The location of the stars in this plane is used to define the different categories of metal poor r-process enhanced stars.
Comparison with observations of r-process enhanced metal poor stars

Metal poor stars encode a wealth of information about the conditions in the early universe when these stars were formed (Frebel & Norris 2015). Such stars are divided into two categories, MP-r I and MP-r II , based on the r-process element abundance in their spectra. MP-r I stars are metal poor stars that show mild enhancement of r-process elements, namely (0.3 < [Eu/Fe] < 1) and ([Fe/H] < −1.5). MP-r II stars are defined as those with higher levels of r-process abundance, namely (1 < [Eu/Fe]) and ([Fe/H] < −1.5). These two categories are outlined in the right panel of Figure 4. Based on the Hamburg/ESO r-process Enhanced Star survey (HERES; Barklem et al. 2005), out of 253 metal poor stars with −3.8 < [Fe/H] < −1.5, about 5% are MP-r II and another 15-20% are MP-r I stars. Separately, based on the SAGA database of stellar abundances, Abate et al. (2016) reported that out of 451 metal poor stars with Eu and Ba abundances, 26 (∼ 6%) are found to belong to the MP-r II class.
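The selection cuts used throughout this section can be collected into a small classification helper, sketched below with illustrative inputs. The CEMP cut here uses [C/Fe] > 1 (the survey comparisons above adopt 0.7), and the CEMP-r criterion anticipates the definition given later in the text.

```python
def classify_star(FeH, CFe, EuFe=None, BaEu=None):
    """Return the metal-poor categories a star falls into, per the cuts in the text."""
    labels = []
    if FeH < -1.0 and CFe > 1.0:
        labels.append("CEMP")
        if EuFe is not None and BaEu is not None and EuFe > 1.0 and BaEu < 0.0:
            labels.append("CEMP-r")
    if FeH < -1.5 and EuFe is not None:
        if 0.3 < EuFe < 1.0:
            labels.append("MP-rI")
        elif EuFe > 1.0:
            labels.append("MP-rII")
    return labels

print(classify_star(FeH=-2.4, CFe=0.2, EuFe=1.3, BaEu=-0.5))  # ['MP-rII']
print(classify_star(FeH=-2.8, CFe=1.4, EuFe=0.5))             # ['CEMP', 'MP-rI']
```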
Figure 3. Left panel: the distribution of star particles in the [C/Fe] − [Fe/H] plane. Each point is a star particle, color coded by its age (i.e., the red points are stars formed at the highest redshift in the simulation). The adopted Fe and C yields from Pop. II and Pop. III SNe are listed in Table 1. Right panel: the cumulative ratio of CEMP stars to MP stars in the simulation as a function of redshift. The black line shows the T1E51 simulation, but the result is identical for all other simulations. The black star indicates the observed cumulative ratio of ≈ 5% (Lee et al. 2013) from the SDSS/SEGUE database, and the orange hexagon is the updated analysis from Yoon et al. (2018). In this comparison we have adopted [C/Fe] > 0.7 as the definition of a CEMP star, to be consistent with the statistics presented in Lee et al. (2013) and Yoon et al. (2018).

Figure 4. Each point is a star particle, color coded by its age (i.e., the red shows the stars that formed at the highest redshift in the simulation). The adopted Fe and C yields from Pop. II and Pop. III SNe are listed in Table 1. The Eu yield per NS merger event is set to 1.5 × 10 −5 M based on the yield estimates from the NS-NS merger detected by aLIGO/Virgo (GW170817); this reflects the lanthanide-rich material in the wind ejecta from NS-NS merger events. In the middle panel, we also show the five stars in Reticulum II whose abundances in both carbon and europium are measured.

The left panel of Figure 5 shows the cumulative fraction of all the MP stars that are MP-r I . This is cumulative in the sense that it indicates the fraction of all the MP stars formed by redshift z that belong to the MP-r I class. We show the results for T1E50 (solid blue), T10E51 (dashed green), T1E52 (dot-dashed red), and T1E51 (solid black), respectively. The T100E51 simulation results in zero MP-r I stars and is not shown in the plots. The black dot shows the ratio of MP-r I to MP stars from observations of the MW's halo stars, which is about 20% (Abate et al. 2016). Our simulations predict a ratio more than an order of magnitude below the observed level if the source of r-process is solely NSMs, given the adopted rate of their formation and the assigned r-process yield.
Our results should be understood in the context of the imposed delay time distributions. When a minimum timescale of 1 Myr is adopted for the merging of the DNSs after they are formed, the power-law distribution gives a median merging timescale of about 100 Myr. When the minimum timescale is changed to 0 or 10 Myr, the median merging timescale changes to 3 or 300 Myr, respectively. These median timescales matter in that they need to be compared to the duration of a typical phase of star formation in a given MW progenitor halo. Merging timescales that are long relative to the star formation timescale lead to NSM events that do not effectively enrich the medium, in the sense that the r-process material does not get recycled into stars formed after the event. This is either because star formation has ceased by the time of the NSM event, or because a new phase of star formation occurs with a delay long enough that the r-process material becomes too diluted before being recycled into new stars. This is clearly shown in the simulation with a minimum merging timescale of 100 Myr, in which no MP-r I stars are born.
The middle panel of Figure 5 shows the cumulative fraction of the MP stars that are categorized as MP-r II . The lines are the same as in the left panel. The ratio of MP-r II stars to MP stars predicted in the simulation is about an order of magnitude less than the observed level in the MW halo.
The right panel of Figure 5 shows the cumulative fraction of CEMP-r stars among all the CEMP stars. CEMP-r stars are defined as a subclass of CEMP stars with [Eu/Fe] > 1 and [Ba/Eu] < 0, and there are a handful of theories regarding their formation (Abate et al. 2016). The location of this category of stars is outlined with dashed brown lines in the middle panel of Figure 4. Out of 56 CEMP stars with barium and europium abundances, Abate et al. (2016) found 5 to be CEMP-r stars and 26 to be CEMP-r/s stars. Only a few percent of all the CEMP stars are CEMP-r in our simulation, which is an order of magnitude less than the observed frequency of this class of stars.
The impact of E NSM is understood as follows: lower energies disperse the r-process material over a smaller volume, and the resulting higher concentration of r-process material leads to the formation of r-process enhanced stars. The impact of E NSM is subdominant compared to the effect of the minimum time adopted for the delay time distribution. Lower minimum delay times (1 Myr, black line) lead to more NSM events in a halo, while large minimum times (as in the T100E51 simulation) result in the formation of no r-process enhanced stars.
In all three panels of Figure 5, the thin black dashed lines indicate an assumed NSM rate of ≈ 2 × 10 −4 per M of stars formed, or equivalently a europium yield of 3 × 10 −4 M per event, which matches the statistics of the r-process MP stars. This boosted NSM rate, however, overpredicts the same statistics for the CEMP stars. The mismatch in the Eu yield required to match the observations either shows that we need more robust statistical data for the CEMP stars, or that the r-process MP stars have been enriched by a separate source in addition to the NSMs.
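The factor-of-20 scaling behind these dashed lines follows directly from the fiducial numbers adopted earlier; the short check below simply restates that arithmetic.

```python
# Values from the text.
base_rate  = 1.0e-5   # NSM events per Msun of stars formed (one DNS per 1e5 Msun)
base_yield = 1.5e-5   # Msun of Eu per NSM event (GW170817-motivated)
boost = 20.0

# Eu produced per unit stellar mass formed is rate * yield, so the boost can be
# absorbed either into the rate or into the yield:
print(boost * base_rate)    # 2e-4 events per Msun, as quoted above
print(boost * base_yield)   # 3e-4 Msun of Eu per event, as quoted above
```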
DISCUSSION AND CONCLUSIONS

While both core-collapse supernovae and neutron star mergers could explain the observed abundance of r-process elements in the Galaxy (Cowan et al. 1991; Woosley et al. 1994; Rosswog et al. 1999, 2000; Argast et al. 2004; Kuroda et al. 2008; Wanajo 2013; Wehmeyer et al. 2015), only r-process production in NSMs has been measured directly, and therefore we model the production of r-process material through NSMs.
We performed cosmological zoom simulations of a MW type halo with a dark matter particle mass resolution that can resolve halos of mass ∼ 10 7 − 10 8 M , with a spatial resolution of ∼ 5 pc. These high resolution zoom simulations are aimed at explaining the observed high frequency of r-process enriched stars in the MW's halo. We assume that the only r-process sources are NSMs, which are assigned delay times drawn from a power-law distribution, as predicted by population synthesis codes (Dominik et al. 2012). We assign a europium yield to the NSM events corresponding to 0.04 M of wind ejecta with the solar r-process residual pattern, as is possible for GW170817 (Côté et al. 2018b).
We track the formation of MP and CEMP stars and their r-process enriched counterparts, MP-r I , MP-r II , and CEMP-r stars, and we study the impact of two parameters: (i) the minimum timescale for merging after a DNS is formed, and (ii) the impact of E NSM on the mixing of the r-process material in a halo. Our simulations underpredict the observed ratio of r-process enhanced stars to their parent category by about an order of magnitude. We note that implementing the natal kicks would further reduce this enrichment level.
Our findings show that increasing the minimum timescale for merging of the DNSs results in a drop in the overall statistics of the r-process enhanced metal poor stars. This is due to the fact that a longer minimum merging timescale leads to fewer NSM events during a given timespan, while increasing the median merging timescale of the DNSs. For example, the median timescale for merging of the DNSs is (3, 100, 300) Myr if the minimum timescale is set to (0, 1, 10) Myr, respectively. Similarly, the lower the energy of the NSM event, the less mixing the r-process material experiences in the halo, which actually leads to higher levels of r-process enhancement for the subsequent stars formed in the halo. The impact of the assumed E NSM is subdominant compared to the impact that the merging timescale has on the final level of r-process enrichment.
Given that, when increasing the minimum time for merging from 1 Myr to 100 Myr, we are not able to form any MP-r I or CEMP-r stars, fast merging channels for the DNSs seem to be a requirement for NSMs to contribute, even modestly, to the r-process enrichment of the Galaxy at high redshifts.
In order to match the observed enrichment, we can think of two options: (i) adopting a higher Eu yield, or (ii) increasing the DNS birth rate. Regarding the first option, it is highly unlikely that higher Eu yields are possible from an NSM event. The adopted yield is estimated from GW170817 (Cowperthwaite et al. 2017; Côté et al. 2018b), assuming a disk ejecta mass of 0.04 M . However, we note that in Naiman et al. (2018), the adopted yield is three times higher than what we have adopted in our study.
Figure 5. Cumulative fractions of MP-r I (left), MP-r II (middle), and CEMP-r (right) stars, with the T1E51 result shown in solid black. In all panels, the black dots indicate the observed ratio in the MW halo stars from Abate et al. (2016). The simulations severely underpredict all the observed ratios, by about an order of magnitude in the case of the T1E51 simulation (black lines) and more so when the minimum time for merging is increased to 10 Myr. Moreover, although a lower explosive energy of the NSM event helps increase the fraction of r-process stars, this effect is subdominant compared to the impact of the minimum timescale for merging. The thin dashed lines in all three panels indicate the T1E51 result scaled by a factor of 20, translating into an NSM rate of ≈ 2 × 10 −4 per M of stars formed. This higher assumed NSM rate would match the observed frequency of the r-process MP stars but overpredicts that of the CEMP stars.

Regarding the second option, there is a large tension between the observed NS merger rates and the rates predicted by population synthesis models (Belczynski et al. 2017; Chruslinska et al. 2018). The value of one merger per 10 5 M of stars adopted in this work corresponds to a MW NSM rate of R MW ≈ 10 −4 /year (van de Voort et al. 2015b). This rate is based on the assumption that the minimum timescale for merging of the binaries is 30 Myr and that the final stellar mass of the MW is about 3 × 10 10 M . This rate corresponds to almost the maximum rate predicted in population synthesis models with various variations, and is about an order of magnitude above the observational estimates based on Galactic double pulsars (Kim et al. 2015). However, translating this rate into a local volumetric rate, we find it to be similar to the LIGO/Virgo merger rate estimate of 1540 +3200 −1220 Gpc −3 yr −1 (Abbott et al. 2017a). The NSM birth rate is subject to the details of the models implemented in the population synthesis codes (Belczynski et al. 2002, 2008; Dominik et al. 2012). In the standard model assumed in these codes, which mostly concerns the assumptions governing the common envelope (CE) phase during the formation of a compact binary system, we find that with the adopted Kroupa initial mass function (Kroupa & Weidner 2003) the DNS birth rate is about 2.5 per 10 5 M of stellar mass modeled. However, this birth rate can be boosted by a factor of three in variations of their standard model (for example in variation 15 of Dominik et al. 2012), which translates into NSM birth rates about six times what we have assumed in this study. While increasing the r-process yield would not impact the star formation history of our galaxy and would simply shift the stellar particles up and down along the [Eu/Fe] or [Eu/H] axis, we cannot treat the birth rates in the same way as the yields. Higher birth rates would affect the iron yield from the CCSNe, as their relative number density would be affected. In other words, while in our simulation there is one DNS born per 1000 CCSNe, changing that to one DNS per 100 CCSNe would significantly impact the metallicity trends in our halos.
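As a rough cross-check of the comparison with the LIGO/Virgo rate made above, the back-of-envelope conversion below multiplies the adopted per-galaxy rate by an assumed Milky-Way-equivalent galaxy number density; the 0.01 Mpc −3 value is our illustrative round number, not a figure from the text.

```python
R_MW = 1.0e-4            # NSMs per year per MW-like galaxy (adopted in the text)
n_MW = 0.01              # MW-equivalent galaxies per Mpc^3 (assumed ballpark value)
rate_per_Gpc3 = R_MW * n_MW * 1.0e9   # convert Mpc^-3 to Gpc^-3
print(rate_per_Gpc3)     # ~1e3 Gpc^-3 yr^-1, within the 1540 (+3200, -1220) interval
```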
Based on our results, higher yields or higher birth rates, together with fast merging timescales, are needed to match the observations of the MW halo's metal poor r-process enhanced stars. Similar conclusions have been reached based on chemical evolution studies of the Galaxy (Côté et al. 2018a), where it has been suggested that a second source of r-process is needed in order to explain the observed trends in the MW's disk. Moreover, the long delay between GW170817 and the star formation activity of its host galaxy, NGC 4993 (Levan et al. 2017), indicates that the merger rate at short delay times is different at high redshifts. Whether either of these choices would be consistent with the expected theoretical calculations of the r-process yield in NS merger events or with the metallicity evolution at the highest redshifts remains to be explored. Upcoming data from the R-Process Alliance are projected to increase the detected number of MP-r II stars to 125, with over 500 new MP-r I stars, in the next several years (Hansen et al. 2018; Sakari et al. 2018). Moreover, upcoming data on the frequency of CEMP-r stars from high-resolution observations of a sample of approximately 200 bright CEMP stars by Rasmussen et al. (in prep) are likely to provide a much improved estimate of the frequencies of CEMP subclasses.
FUTURE WORK
We have not modeled the natal kicks of the DNSs in this work. However, their impact is expected to be significant, specifically if natal kicks and delay times are not correlated for a DNS. DNSs are thought to be the precursors of short gamma-ray bursts (sGRBs), and the locations of sGRBs with respect to galaxies in the field can provide clues to the natal kick distribution of the DNSs. By studying host-less GRBs, Fong & Berger (2013) derived natal kick velocities in the range of 20-140 km s −1 with a median value around 60 km s −1 .
From a theoretical perspective, population synthesis analyses of DNSs (Fryer et al. 1998) simulate binary systems with different initial masses for each star, initial eccentricities, and orbital separations, and follow them until they merge, arriving at the natal kick velocity imparted after the second star explodes as a SN. Such models arrive at natal kick distributions with an exponential profile and a median of 180 km s −1 (Behroozi et al. 2014). Safarzadeh & Côté (2017) studied the impact of DNS natal kicks on the Galactic r-process enrichment and concluded that almost 50% of all the NSMs that have occurred in the star formation history of a MW type system do not contribute to the r-process enrichment, as the DNSs merge well outside the galaxy's effective radius.
For systems with shallow potential wells, such as the ultra faint dwarfs (UFDs, with halo masses of ∼ 10 7−9 M ; Simon et al. 2011) and their progenitors at high redshifts (Safarzadeh et al. 2018), small natal kicks on the order of 10-20 km s −1 can make DNSs escape their hosts (Kelley et al. 2010; Safarzadeh & Côté 2017). This can severely impact the level of enrichment of the halos and should leave a clear mark on the CEMP-r /CEMP ratio, specifically since CEMP stars only form early on, before the halo is heavily enriched with metals, and it would be almost impossible to form CEMP-r stars if the DNSs escape their host halo.
Another avenue to improve on the present work would be to model the s-process enrichment of the stars, so that comparisons could be made with the statistics of the CEMP-s stars in the MW. For that we would need to model the formation of AGB stars (Sharma et al. 2018). This work could also be expanded to a whole suite of MW type halos in large simulation suites such as Auriga (Grand et al. 2017) and Caterpillar (Griffen et al. 2016) to achieve a reliable estimate of the halo-to-halo scatter.
Public contributions to early detection of new invasive pests
Early detection of new invasive pest incursions enables faster management responses and more successful outcomes. Formal surveillance programs—such as agency‐led pest detection surveys—are thus key components of domestic biosecurity programs for managing invasive species. Independent sources of pest detection, such as members of the public and farm operators, also contribute to early detection efforts, but their roles are less understood. To assess the relative contributions of different detection sources, we compiled a novel dataset comprising reported detections of new plant pests in the US from 2010 through 2018 and analyze when, where, how, and by whom pests were first detected. While accounting for uncertainties arising from data limitations, we find that agency‐led activities detected 32–56% of new pests, independent sources detected 27–60%, and research/extension detected 8–17%. We highlight the value of independent sources in detecting high impact pests, diverse pest types, and narrowly distributed pests—with contributions comparable with agency‐led surveys. However, in the US, independent sources detect a smaller proportion of new pests than in New Zealand. We suggest opportunities to further leverage independent pest detection sources, including by citizen science, landscaping contractors, and members of the public.
| INTRODUCTION
Invasive pests cause significant damages to economic and ecological systems, including to agriculture, biodiversity, and ecosystem service provisioning. Estimates of total annual damage and control costs for invasive species in the United States exceed $160 billion (2019 USD) (Pimentel, Zuniga, & Morrison, 2005). With increasing trade, changing climate, expanding source distributions of non-native pests, and increases in difficult-to-manage invasion pathways (such as ecommerce), rates of invasion and costs likely will continue to expand (Epanchin-Niell, McAusland, Liebhold, Mwebaze, & Springborn, 2021;Essl et al., 2015).
Efforts to avoid or reduce impacts from invasive pests include offshore and border prevention activities, as well as post-introduction efforts aimed at eradicating or controlling invasions. For invasive species that become established in a new region, detection is critical to initiating eradication and control responses. Earlier detection can lead to better outcomes and lower long-term costs, as smaller invasions generally are easier and less costly to control and adaptation measures can be initiated sooner (Epanchin-Niell, Brockerhoff, Kean, & Turner, 2014;Liebhold et al., 2016;Lodge et al., 2006;Pyšek & Richardson, 2010).
Early detection of pests can arise from multiple sources (Hester & Cacho, 2017). Active surveillance by government agencies for early detection of new pest incursions, which we refer to as agency detections, involves programs at various government levels. Agency detections often include high-risk site surveillance and commodity-and pest-specific surveys (e.g., Acosta et al., 2020;Arndt, Robinson, Baumgartner, & Burgman, 2020;USDA, 2019). A second broad source of detections is local extension specialists or researchers, who may encounter pests during their routine activities or more systematic survey efforts (e.g., Hester & Cacho, 2017); we term these research/extension detections. A third broad source consists of members of the public and farm and nursery operators, which we refer to as independent sources. These detections, particularly those by members of the public, are often viewed by agencies as fortuitous, because they contribute to achieving biosecurity objectives but are not the direct outcome of planned investments in surveys (Hester & Cacho, 2017).
Our classification of detection sources is similar to previous pest detection categorizations, which generally are described as spanning from active to passive (e.g., Hester & Cacho, 2017;Pocock, Roy, Fox, Ellis, & Botham, 2016;White, Marzano, Leahy, & Jones, 2019). However, we employ the term independent instead of passive, in recognition that detections by operators or members of the public may be either passive (i.e., unintentional) or active (e.g., resulting from routine private activities to monitor landscaping or crop health), while nonetheless occurring independently from agency survey efforts.
Independent sources contribute to detecting new incursions and to monitoring spread of established invasions. For example, the Asian longhorned beetle (Anoplophora glabripennis), a major threat to hardwood trees in the US, was first detected and reported in 1996 by a Brooklyn resident who observed damage to street trees (Haack, Law, Mastro, Ossenburgen, & Raimo, 1997). Subsequent incursions of this species were also first reported by the public, often in private residential gardens (EPPO, 2011;Haack, Hérard, Sun, & Turgeon, 2010;Straw, Fielding, Tilbury, Williams, & Inward, 2015). Other examples in the US include the Asian shore crab (Hemigrapsus sanguineus) detected by a college student on a field trip and more recently the Asian giant hornet (Vespa mandarinia) discovered by a citizen on their front porch (Baker, 2020;McDermott, 2004). Independent sources have been prevalent and critical to early invasive species detection and management in other countries as well, see examples from New Zealand and Australia in Bleach (2019) and Hester and Cacho (2017).
Recent research addressing the design and efficacy of surveillance programs for early detection of new pest incursions has largely addressed the questions of how much-and where-survey resources should be deployed to minimize long-term costs and damages (e.g., Epanchin-Niell, 2017;Epanchin-Niell et al., 2014;Epanchin-Niell, Haight, Berec, Kean, & Liebhold, 2012;Hauser & McCarthy, 2009;Holden, Nyrop, & Ellner, 2016;Horie, Haight, Homans, & Venette, 2013;Kaiser & Burnett, 2010;Moore & McCarthy, 2016;Yemshanov et al., 2015). While these studies have focused almost entirely on optimizing targeted agency surveillance (e.g., trapping), sensitivity analyses in Epanchin-Niell et al. (2014) demonstrate that the background rate of invasion detection-in the absence of active surveillance-is an important factor in determining optimal surveillance investment. Specifically, in contexts where pests are unlikely to be detected by other means, the benefits of agency surveillance are greater, all else equal. Therefore, a better understanding of background detection rates can lead to more effective active surveillance program design.
Despite increasing recognition of independent sources for early detection, quantitative understanding of the public's contribution to detection outcomes, as well as factors affecting detection likelihood, is limited. In New Zealand, Bleach (2018) finds that 63% of investigated detections of new pest incursions over one year were reported by the general public and an additional 10% had been reported by industry. This highlights the important role of independent detections in New Zealand, where residents have a legal mandate to report any pests they detect (Biosecurity Act 1993, Section 44). In Australia, Carnegie and Nahrung (2019) find that 36% of the 34 total forest pest detections over 20 years were by independent sources. The only similar study in the US, Looney, Murray, LaGasa, Hellman, and Passoa (2016), finds that 36% of new pest detections in Washington State over 24 years were by independent sources.
While studies have hypothesized factors likely to contribute to enhanced detection by independent sources, such as detections occurring on private lands or detections of highly conspicuous pests (Brown, van den Bosch, Parnell, & Denman, 2017;Cacho et al., 2010;Hester & Cacho, 2017;Looney et al., 2016;Pocock et al., 2016;Poland & Rassati, 2019), these have been largely untested. Understanding of the types of pests detected by various sources, where detections occur, and how quickly different sources detect pests is an informational gap that hinders effective accounting of independent sources of detection in biosecurity planning.
In this study, we develop and analyze a new dataset to explore detection sources responsible for intercepting and reporting new invasive pests in the United States. We classify detection sources based on the entities that detected and reported each new pest. For each pest detection, we also characterized the setting, geographic location, type of pest, anticipated impact of the pest, and estimated distribution of the pest within the United States when detected. We use these data to evaluate the relative contribution of each source in detecting new pests and to explore factors and circumstances influencing detection frequency across sources. In addition, we consider the potential that new pests could be detected even earlier through close monitoring of citizen science platforms such as iNaturalist. For this we compare the date of detection in our data with the first report date for each pest on iNaturalist to determine if any were reported earlier via that platform.
We provide several contributions relative to the current literature. We present the first national-level analysis of sources of new pest detections in the United States, and compare our findings with those from a nationalscale study in New Zealand (Bleach, 2018) and a statelevel analysis from Washington in the United States (Looney et al., 2016). We also categorize and evaluate contextual variables about detections that have been suggested as relevant to understanding pest detection activities but have not previously been tested. Specifically, we meet calls for data collection on pest characteristics (Hester & Cacho, 2017;Looney et al., 2016) and detection contexts (Carnegie & Nahrung, 2019) to scrutinize assumptions typically made about the attributes of different sources of detection (Froud, Oliver, Bingham, Flynn, & Rowswell, 2008). We also outline opportunities to better leverage independent pest detection sources and data documentation and analysis needs to further enhance understanding of pest detection sources.
| Data
Our analyses focus on first detections of pests that are new to the United States or to broad regions of the country and pose potential regulatory concern (e.g., because of ecological or economic consequences). Our data consist of detections of new, non-native pests in the United States over the nine-year period from January 2010 through December 2018 that triggered the preparation of a New Pest Advisory Group (NPAG) report by the US Department of Agriculture's Animal and Plant Health Inspection Service (USDA-APHIS, 2021). Detections of several species with frequent introductions and routine eradication efforts, such as the Asian gypsy moth (Lymantria dispar asiatica) and Mediterranean fruit fly (Ceratitis capitata), do not trigger new NPAG reports and are therefore excluded. We also exclude species that are not yet in the United States, were detected only during international port inspections, or have only preassessments rather than completed NPAG reports.
We collected data for 169 pest detections by reviewing and extracting relevant information from their respective NPAG reports (USDA-APHIS, personal communication, April 3, 2019). We code the following categorical variables for each pest based on information in the report: initial detection source (Table 1); the anticipated economic sectors affected by the pest (horticulture, forest, agriculture); expected economic and environmental impacts from pest establishment (high or limited economic impact, environmental impact reported or not); regulatory classification (actionable/ reportable, nonactionable/nonreportable); the distribution of the pest at the time of detection (occurrences in a single county, across multiple clustered counties, or across widespread counties); pest type; setting (e.g., nursery, farm, private residence). The final data set and details on how these variables were coded are provided in the Supporting Information (Tables S1-S6).
| Detection source identification
We define the initial detection of a pest as the initial detection event that led to the pest being reported to the NPAG. We classified detections according to 10 narrowly delineated sources ("narrow detection source categories"), then aggregated these into intermediate and broad groupings for analysis (Table 1).
NPAG reports did not always describe a detection source, and some were ambiguous as to whether the documented source was for the initial detection or for a detection associated with subsequent confirmation or identification activities. We consulted archived NPAG documents at APHIS offices to further resolve sources and ambiguity. We were able to confirm initial detection sources for 62% (n = 105) of the 169 pests detected in our focal time period ("confirmed" hereafter). For 31% of observations, a detection source was identified but uncertainty remained as to its primacy ("unconfirmed" hereafter). We exclude from our analyses observations with entirely unknown detection source pathways (n = 12; 7%).
| Analyses and hypotheses
Our analyses include statistical assessment of eight hypotheses (Table 2) regarding differences among detection sources, as well as a discussion of summary statistics. To transparently represent the uncertainty in the primacy of detection sources in our sample, we present two sets of results for each analysis: one using all pests with detection sources, both confirmed and unconfirmed initial detections (n = 157), and one using only data associated with confirmed initial detections (n = 105). All analyses were conducted in R (R Core Team, 2018). First, we examine the distribution of initial pest detections across sources (H1); we test differences in the proportion of detections by each broad detection source using a chi-squared goodness-of-fit test and associated Bonferroni-corrected pairwise tests. We then compare the sources of detection in our study with those in previous empirical studies analyzing initial pest detections in Washington State (US) and New Zealand (H2; Looney et al., 2016; Bleach, 2018; Tables S7 and S8). We hypothesize that the proportional contributions of each broad detection source for pests in our study are more similar to those observed in the US study than to those in the New Zealand study (H2). We test differences in the proportions of detections by each broad detection source in our data compared with the detection sources in Looney et al. (2016) and Bleach (2018) with two-tailed, two-proportion z tests. The hypotheses and the rationale for each (Table 2) are listed below.

H1. Agency sources detect more pests than either research/extension or independent sources. Agency sources are funded and targeted to detect pests and thus may be the most effective detection sources (Froud et al., 2008; Keith & Spring, 2013), but nonagency sources contribute substantially to detections (Carnegie & Nahrung, 2019; Looney et al., 2016).
H2. Relative to the sources reported in other pest detection studies, the proportional contributions of the broad detection sources in our data are more similar to those in the United States (Washington State) study than to those in the New Zealand study. (S; the general public accounts for a lower proportion of detections in the United States relative to New Zealand.) Biosecurity policies differ across countries. For example, New Zealanders have a legal obligation to report organisms "not normally seen or otherwise detected" (Biosecurity Act 1993, Section 44).
H3. Relative to other broad detection sources, a higher proportion of agency detections are of high-impact pests (i.e., pests with anticipated environmental or economic impact). (NS) Agency activities often target pests of economic or environmental concern.
H4. Relative to other broad detection sources, a higher proportion of agency detections will be of pests listed as reportable/actionable. (NS) Agency surveillance generally focuses on economically or environmentally important hosts and therefore may detect a greater proportion of pests deemed reportable/actionable.
H5. Relative to other broad detection sources, a higher proportion of agency detections will be of pests with limited distributions at the time of detection. (NS) Agency activities often target areas where pests have a higher expected probability of introduction (e.g., Epanchin-Niell et al., 2014;Froud et al., 2008;Poland & Rassati, 2019).
H6. Relative to other broad detection sources, a higher proportion of agency detections will be of high-value detections (i.e., detections of high-impact pests that are not yet widespread at detection). (NS) Agency activities often target areas where pests have a higher expected probability of introduction (e.g., Epanchin-Niell et al., 2014;Froud et al., 2008;Poland & Rassati, 2019) and pests of high economic or environmental concern.
H7. Relative to other intermediate detection sources, a higher proportion of trapping-based detections will be of insects. (S all ) Detection sources other than trapping detect a greater diversity of pest types, as traps generally are designed to target insects.
H8. Relative to other broad detection sources, a higher proportion of independent detections will occur in private settings, such as residential areas, farms, and nurseries.

Next, we examine how detections of the following pest categories vary across sources: high-impact pests (H3), pests classified as actionable/reportable (H4), pests with limited distribution when detected (H5), and high-impact pests before they become widespread (H6). In addition, we look at the types of taxa detected across sources (H7) and the settings in which pest detections occurred across sources (H8). We assess H3-H8 by testing differences in proportions of each pest "type" (i.e., in terms of impact, actionable/reportable status, distribution, taxa, and setting) across sources using two-tailed Fisher's exact tests and associated Bonferroni-adjusted pairwise tests.
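For readers who want to reproduce this style of analysis, the sketch below illustrates the three named tests in Python with scipy; the counts and the 2x2 table are placeholder values rather than the study data (the analyses in the paper were run in R), and the two-proportion z test is written out explicitly rather than taken from a library.

```python
import numpy as np
from scipy import stats

# Placeholder counts: agency, research/extension, independent.
counts = np.array([88, 27, 42])

# H1: chi-squared goodness-of-fit test against equal expected shares.
chi2, p_gof = stats.chisquare(counts)

# H2: two-tailed, two-proportion z test (pooled), e.g. comparing the share of
# detections by one source in two studies (x detections out of n in each).
def two_prop_ztest(x1, n1, x2, n2):
    p_pool = (x1 + x2) / (n1 + n2)
    se = np.sqrt(p_pool * (1.0 - p_pool) * (1.0 / n1 + 1.0 / n2))
    z = (x1 / n1 - x2 / n2) / se
    return z, 2.0 * stats.norm.sf(abs(z))

# H3-H8: Fisher's exact test on a 2x2 table (e.g. high-impact vs. other pests,
# agency vs. independent source), Bonferroni-adjusted across pairwise tests.
table = np.array([[30, 58], [20, 22]])
odds, p_fisher = stats.fisher_exact(table)
p_adj = min(1.0, p_fisher * 3)   # Bonferroni correction for three pairwise tests
```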
Finally, to assess the role that iNaturalist, a popular citizen science platform, could play in augmenting existing surveillance systems, we investigate whether any pests in our dataset that were detected January 1, 2015, or later (n = 67) were reported on iNaturalist before the detection date in the NPAG report.
| Detection source contributions to pest detection (H1)
Across the 157 total detections in our dataset (confirmed and unconfirmed), agency sources account for 56%, independent sources for 27%, and research/extension sources for 17% (Figure 1; Table 3). A high number of agency detections are driven by general agency monitoring (27), industry inspections (24), and trapping surveys (24). All detections from commodity inspections, operators, and the general public were confirmed, but many detections from general agency monitoring, industry inspections, trapping surveys, and researcher/extension activities were unconfirmed (Figure 1a). Examining narrow source categories, members of the public (18) and operators (17) account for a substantial number of detections within the independent source category, while private contractors (6) and citizen science (1) detected fewer pests (Figure 1b). Focusing only on confirmed initial detections, agency sources account for 48%, research/ extension sources account for 12%, and independent sources account for 40% (Figure 1; Table 3).
Using pairwise chi-squared comparisons, we find that the agency source category detected a significantly higher number of pests than either the research/extension (p < .01) or independent (p < .01) source category when analyzing all detections (Table S9). However, when running pairwise chi-squared tests for confirmed detections only, the agency source had a significantly higher number of detections than the research/extension source (p < .01), but not the independent detection source (p = 1.0) ( Table S9). These tests support our hypothesis H1-that agency sources detect more pests than either extension/research or independent sources-for all detections, but not for the confirmed subset.
Independent detection sources may be underrepresented in our data because unconfirmed initial detections may include reports based on follow-up activities to an unidentified independent detection. We explore the implications of this in Table 3. The third row of Table 3 quantifies the maximum possible contribution of independent sources to pest detection, as well as the minimum contributions of agency sources, in our data, by presenting a scenario that attributes all unconfirmed detections to independent sources. Under this exploratory scenario, 60% and 32% of detections are attributed to independent and agency sources, respectively. These findings indicate that the actual contribution of independent sources to initial detection lies somewhere between 27% and 60% of detections, while agency sources were responsible for 32% to 56% of initial pest detections.
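The bounding logic in this paragraph can be written out explicitly; the counts below are approximate, back-calculated from the reported percentages (n = 157 with identified sources, of which 105 are confirmed), and are shown only to illustrate the calculation.

```python
# Approximate counts back-calculated from the reported percentages.
confirmed   = {"agency": 50, "research": 13, "independent": 42}   # n = 105
unconfirmed = {"agency": 38, "research": 14, "independent": 0}    # n = 52
n_total = 157

# Lower bound for independent sources: confirmed independent detections only.
indep_low = confirmed["independent"] / n_total                                  # ~0.27
# Upper bound: attribute every unconfirmed detection to independent sources.
indep_high = (confirmed["independent"] + sum(unconfirmed.values())) / n_total   # ~0.60

# The agency share then ranges from its confirmed-only minimum to its
# all-detections maximum.
agency_low = confirmed["agency"] / n_total                                      # ~0.32
agency_high = (confirmed["agency"] + unconfirmed["agency"]) / n_total           # ~0.56
print(indep_low, indep_high, agency_low, agency_high)
```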
| Comparison with pest detection sources in other contexts (H2)
We compare our national-scale US pest detection data to two similar datasets (Figure 2). We find no statistical difference between our data and Washington state-level data from Looney et al. (2016) (Figure 2a) in terms of the relative contributions of independent, agency, and research/extension sources for all detections (p = .23, .37, .93, respectively) and for confirmed detections (p = .68, 1.00, .69, respectively) (Table S10). These results support hypothesis H2 that the relative contributions of the broad detection sources US-wide are similar to those in Washington state.
When comparing our data to New Zealand data from Bleach (2018) (Figure 2b), we find statistically significant differences in the proportional contributions of the general public, agency, and research/extension detection sources, when considering all detections (p < .01) and confirmed detections only (p < .01) (Table S11). Specifically, the agency detection source accounts for a higher proportion of detections in the United States than in New Zealand, and the general public accounts for a lower proportion of detections in the United States. These results support hypothesis H2 that there are significant differences in the relative contributions of the broad detection sources in the United States and New Zealand.
| Economic and environmental impact (H3)
When examining the economic sectors at risk from pests in our dataset, we find that the agricultural sector has the highest number of potential pests detected (n = 80), followed by the horticultural sector (n = 62). Twenty-five pests in our dataset affect multiple sectors. The forestry sector is affected by the least number of pests (n = 21).
The potential economic and environmental impacts of a new pest are major factors in determining the value of an initial detection. We found that 52% of pests are described in the NPAG reports as likely to be a major economic pest or to have environmental impact; we refer to these as high-impact pests. Another 18% are classified as having unknown potential impacts. The remaining 30% are classified as limited-impact pests (i.e., no indicated environmental impact and low anticipated economic impacts) (Figure 3a). The general public detected the highest number of high-impact pests across intermediate detection sources (n = 17). Contrary to our hypothesis H3, we find no differences in the proportion of high-impact pests detected by broad detection sources for all detections (p = .43) and for confirmed detections (p = .69) (Table S12). Contrary to our hypothesis H4, we find no difference in the proportion of pests that are actionable/reportable across broad detection sources (Figure 3b) for all detections (Fisher's exact test, p = .84) and for confirmed detections (Fisher's exact test, p = .71) (Table S13).
| Pest geographic distribution (H5)
We examine the estimated geographic distribution of pests at their time of detection, dependent on their source of detection. General agency monitoring and the general public are the intermediate detection sources responsible for the highest percentages (21% each) of detected pests with limited distributions in their new range (Figure 3c). Contrary to our hypothesis H5, we find that relative to other broad detection sources, a higher proportion of detections by the independent source were of pests with a limited distribution when considering all detections (Fisher's exact test, p < .01) or confirmed detections (Fisher's exact test, p < .01) (Table S14).
| Detection value (H6)
We identified 62 detections as high-value, which we define as detections of pests whose distributions are not yet widespread at the time of detection and are high-impact. We identified an additional 26 potentially high-value detections, which we define as detections of pests whose distributions were not yet widespread when detected but whose potential impacts are unknown. General agency monitoring and the general public were responsible for a high number of high-value detections, with the general public accounting for a relatively greater proportion of high-value detections among confirmed initial detections (Figure 3d). However, we found no statistical difference in the proportion of high-value detections across our broad detection sources for all (Fisher's exact test, p = .79) and confirmed (Fisher's exact test, p = .92) detections (Table S15), thereby failing to support hypothesis H6.
| Pest characteristics (H7)
We find that most sources detect many different types of pests (Figure 3e). The majority of detections across nearly all sources were insects, the most commonly detected pest type in our data (n = 76). The general public detected pests representing all types except mollusks, of which there were only two detections. Confirming hypothesis H7, insects constitute a significantly higher proportion of detections by trapping surveys than detections by any other source for all detections (Fisher's exact test, p < .01), and by any source except research/extension for confirmed detections (Fisher's exact test, p < .01) (Table S16).
| Detection settings (H8)
In analyzing the settings in which pest detections by different sources occurred, several patterns emerge. Detections at inspection sites arise largely from commodity inspections, and detections at research sites occur largely through the research/extension source. The general public is responsible for the highest number of detections in residential areas, and operators discovered the highest number of new pests in nurseries (Figure 3f). We find statistically significant differences in the proportion of detections in private settings across broad detection sources, for all detections (Fisher's exact test, p < .01) and for confirmed initial detections (Fisher's exact test, p < .01). Associated pairwise comparisons show that independent sources detected a significantly higher proportion of pests in private settings than agency sources, for all detections (Fisher's exact test, p = .013) and for confirmed detections (Fisher's exact test, p < .01) (Table S17), confirming our hypothesis H8.
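As an illustration of the Fisher's exact comparisons used throughout this section, the minimal sketch below tests whether the share of private-setting detections differs between two sources; the 2 x 2 counts are hypothetical placeholders, not values from our dataset.

from scipy.stats import fisher_exact

# Hypothetical 2 x 2 table (rows: independent vs. agency source;
# columns: detections in private vs. non-private settings).
table = [[20, 15],   # independent: private, non-private
         [10, 40]]   # agency:      private, non-private

res = fisher_exact(table, alternative="two-sided")
print(res)  # odds ratio and two-sided p-value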
| iNaturalist detections
Of the 67 species in our dataset that were reported in 2015 or later, we find one case where a pest's presence in the United States was reported on iNaturalist prior to the detection event that led to the NPAG report. In that case, the NPAG process was triggered by the pest's detection in a trap in a residential area about 3 months after the initial iNaturalist report. The NPAG and iNaturalist detections occurred about 20 miles apart, and the species, an insect, was found to be distributed across several clustered counties at the time of the NPAG report.
| DISCUSSION
A primary goal of pest surveillance is to detect pests early in the invasion process when they are less widespread and less costly to control. While the importance of various detection sources for new pest introductions has been examined in several specific contexts (Bleach, 2018;Carnegie & Nahrung, 2019;Looney et al., 2016), they have not been examined at the US scale or compared across contexts. In this section, we discuss results and lessons from the first national-level analysis of detection sources for new pest incursions in the United States.
| Key findings
We find that between 32% and 56% of new pest incursions were initially detected and reported by agency sources, such as agency monitoring, surveillance trapping activities, and industry inspections. Between 27% and 60% were detected by independent sources, such as by residential landowners and farm and nursery operators. Between 8% and 17% of detections were detected initially by research or extension personnel. The wide range for each estimate is due to uncertainties in distinguishing initial detection sources from detections arising from follow-up surveillance activities. When comparing our results with similar data for Washington State (Looney et al., 2016), we do not find statistically significant differences in the relative contributions of each broad detection source. However, we find that independent detections play a larger role in New Zealand than in our US dataset. These findings may reflect New Zealand's significant national investment in public pest awareness and reporting, as well as its regulatory mandate for members of the public to report potential pests (Bleach, 2018;Biosecurity Act 1993, Section 44).
Even though independent source contributions to pest detection appear to be lower in the United States than in New Zealand, they are nonetheless substantial and likely provide important economic value. For example, independent sources detected at least 31% of high economic or environmental impact pests in our data. The general public's efficacy in detecting high-impact pests may arise from individuals' inclination to report or inquire about pests that appear to be causing harm, a hypothesis that cannot be assessed with our data.
Independent sources also detected the highest number and proportion of pests with limited distribution. Specifically, of the 62 detected pests with limited distribution, operators detected 11 (18%) and the general public detected 13 (21%). This finding was contrary to our expectation that pests might spread quite far before being noticed and reported by independent sources, as compared with agency sources, which may target areas with high anticipated introduction rates. An important caveat is that ascertaining the distribution of pests first detected by independent sources may be more difficult and uncertain relative to those detected by agency sources, particularly if general or targeted surveillance is not available to evaluate pests' wider potential extent. Thus, the likelihood of underestimating the distribution of pests detected by independent pathways may be greater and deserves additional study.
We further explored the value provided by detections across sources. The value of early detection generally increases with smaller distributions at detection and with higher anticipated impact if left uncontrolled (Epanchin-Niell, 2017). We defined high-value detections as those of pests that NPAG reports anticipated would cause high economic or environmental impacts and that were not yet widespread at the time of detection. We find similar proportions of high-value detections within each broad detection source, although agency sources detected a higher total number. We also find no difference across broad detection sources in the proportion of detected pests recommended for actionable/reportable regulatory status by USDA-APHIS, supporting that independent sources are similarly effective at detecting actionable pests.
Independent sources also detected a similar diversity of pest types as other broad sources, and the general public detected all pest types except mollusks (for which there were just 2 observations). Independent sources played a particularly prominent role in detecting pests in private settings, such as residential areas, which may be most accessible by independent sources. Detections by operators largely occurred in farm, orchard, and nursery settings, as expected, but a high proportion of pests detected by general agency monitoring and industry inspections also occurred in nurseries.
Of the 25 confirmed detections in our data by the public, 6 were by contractors (e.g., landscapers), 1 was through citizen science, and 18 were by the general public (e.g., residential landowners). Given the small percentage of the US population that are landscapers or similar types of contractors, 6 detections via this source appears substantial. Importantly, contractors may be particularly likely to encounter pests in their work, as they are often hired to address tree or landscaping damage.
| Opportunities to further leverage independent detection sources
Our study highlights the diverse sources that detect new pests and the range of contexts in which detections occur in the United States. Our findings support the importance of independent sources in the United States for detecting a diversity of high-impact pests. However, the general public appears to contribute a smaller proportion of new pest detections in the United States than in New Zealand, suggesting there may be opportunities to increase the sensitivity of these sources and augment their contribution to reducing risk. We highlight four key opportunities to increase the contributions by independent detection sources in the US.
| Leverage ethical and environmental attitudes to increase awareness and motivation among the general public
Alternative motivations for public reporting of invasive species (beyond private benefits or legal obligations) include pro-environmental and ethical attitudes, as well as interest (Pocock et al., 2016; Rotman et al., 2012). Motivations for reporting invasive species can depend on perceptions of invasive species and their impacts, which are complex and have not been sufficiently studied (Kapitza, Zimmermann, Martín-López, & von Wehrden, 2019; Shackleton et al., 2019). Influencing perceptions through targeted informational campaigns can increase awareness and motivation, but a solid understanding of stakeholder values regarding potential invasive species is needed to craft an effective social marketing campaign (Dayer et al., 2020). Strategic messaging about invasive species management, framed in terms of stakeholder values and identity and delivered by trusted sources, may be more effective for incentivizing reporting than generalized outreach. Similarly, some stakeholders may be better motivated by appeals to consider the potential economic ramifications of invasive pests.
| Enhance invasive pest reporting channels to agencies
Significant opportunity exists for expanding clear and low-burden channels for reporting detections to agencies. Such channels include hotlines (e.g., Bleach, 2018; Carnegie & Nahrung, 2019), websites like Recording Invasive Species Counts, and apps like the Invasive Alien Species in Europe mobile app (Tsiamis et al., 2017). Strategic public awareness campaigns that include clear reporting channels can increase the likelihood that the general public will report particular species (e.g., Cacho et al., 2012; Roy et al., 2015). Similarly, citizen science networks such as Wild Spotter (2019) and Invaders of Texas (Gallo & Waitt, 2011) both educate the public and provide reporting channels. "Gamification" of reporting channels also has been suggested to motivate reporting (August et al., 2015; Nov, Arazy, & Anderson, 2014; Roy, Pocock, et al., 2012). This model is being implemented in Australia, where the Invasive Species Council recently partnered with the observation app Questagame (Herald, 2019).
Increasing engagement for the purpose of invasive pest management could have auxiliary benefits as well: public participation in citizen science has been shown to promote knowledge diffusion, environmental policy engagement, and behavioral change (Johnson et al., 2014; Lawrence, 2009).
| Integrate existing citizen science observations into agency and other surveillance activities
Integrating existing online citizen science platforms, such as iNaturalist and iSpot, into agency surveillance processes and early detection survey activities by land managers and natural resource contractors could augment both early detection and distribution information (August et al., 2015; Larson et al., 2020; Pawson, Sullivan, & Grant, 2020). As use of these citizen science platforms grows, agencies have an opportunity to utilize these collaborative databases by monitoring for posts of potential new pest detections, as well as for information on spread and distribution (e.g., Pocock et al., 2016). Citizen science platforms have already contributed to specific detections of new pests in Britain (iSpot, 2009; Turner, 2009) and Alaska (iNaturalist, 2019). We found that among the pest detections since 2015 in our dataset, one species had been reported on iNaturalist 3 months before its first reported NPAG detection.
To leverage online citizen science platforms, additional investments in quality assurance may be necessary, as online identifications cannot be validated in a laboratory setting (Dickinson et al., 2010;Roy, Pocock, et al., 2012). In addition, Caley et al. (2020) find through statistical analysis of online citizen science reports in Australia that these sources are relatively sensitive to conspicuous species and insensitive to nondescript species. Therefore, investments in these channels are most likely to be cost-effective in the context of distinctive, highly visible pests.
| Incentivize contractors who manage landscapes to report potential invasive pests
Contractors managing landscapes and treating potential plant hosts are uniquely positioned to detect new pests. Our findings show that this group already contributes importantly to pest detection, and these occupations are well suited for training on invasive pest and damage identification. Promoting reporting among this group, perhaps through certifications or permitting and licensing requirements, might augment detection of novel pests at low cost. The Northwest Michigan Invasive Species Network's "Go Beyond Beauty" program is an example of certification in the context of invasive plants.
Further leveraging independent sources for detecting new pest incursions offers a means to expand detection efforts across far larger areas than can feasibly be targeted by regulatory and agency programs, and seemingly at comparatively low costs. However, this general form of surveillance is likely to contribute a higher proportion of reports that turn out to be species that are native, pose insignificant impacts, or are already known to authorities (e.g., Bleach, 2018). Reports of these insubstantial detections can pose processing and identification costs, which should be considered when designing efforts to increase independent detections.
| Research needs and opportunities to improve data quality
Additional information about the relative efficacy of different sources for detecting pests would be valuable for surveillance planning (e.g., Caley et al., 2020;Pocock et al., 2016). However, estimating the probability of detection is not possible with available data because we lack information about where and when pests were initially established and the distribution of ongoing surveillance activities (Keith & Spring, 2013). Therefore, we cannot estimate the delay until detection or the exposure across space and time of pests to different sources of detection. Importantly, the probability of a pest being detected at a location depends not only on its being present at that location, but also on a source encountering, noticing, and reporting it. Another challenge in determining relative source efficacy is that factors influencing the likelihood of detection by independent sources also may influence the likelihood of introduction (e.g., both may be higher in locations with more people and susceptible resources). Hence, locations with a greater number of detections are not necessarily areas with greater efficacy of detection, and vice versa.
Our study lacks data on false positives, which pose costs. A better understanding of the frequency of false positives and factors affecting their prevalence would facilitate evaluation of the trade-offs associated with efforts to increase independent detections and would enable more careful design of strategies to leverage independent and other detection sources (Hester & Cacho, 2017).
While our findings substantially expand understanding of various sources of new pest detection, their precision is limited by variation in the availability of information about detection contexts for pests in our dataset. Only 62% of detections could be fully characterized from available information, suggesting that significant opportunity exists to enhance detection documentation and future analyses at minimal additional cost. Existing data collection frameworks for new pest detections should be augmented to include specific identification of the initial detection source and circumstances of the detection, such as how the pest came to be noticed. This information could provide insight into a source's motives and which types of pests are more likely to be reported by certain stakeholders (Cacho et al., 2012;Carnegie & Nahrung, 2019).
A clear record of reporting pathways (i.e., how and to what entity the initial detection source reported the observation) would enable improved understanding of the roles of various entities in detection reports and allow for more robust analysis and program design. For example, we observe a significant proportion of detections occurring through local extension agents; knowing whether these originated with members of the public could provide valuable information for outreach resource allocation. Finally, background documentation on agency surveillance activities, especially whether they were implemented in response to specific reports, would allow detections to be traced to specific sources.
| CONCLUSIONS
Through compilation of a novel dataset, we have completed the first nationwide assessment of sources of new pest detections in the United States and empirically evaluated the role of contextual factors and species characteristics in detection of new pest introductions. Independent sources detected a wide diversity of pest types, including high-impact pests. Our findings support that independent sources play an important role in detection and complement agency surveillance activities, particularly in private settings. Lessons from our US case study can be applied in similar contexts, and our analytic framework can be used for other regions and data sources. Holistic consideration of the diverse sources of potential pest detection will facilitate the design of cost-effective surveillance programs, enhancing opportunities for early detection and rapid response to reduce impacts from new pest introductions.
ACKNOWLEDGMENTS
This paper was made possible, in part, by a Cooperative Agreement from the United States Department of Agriculture's Animal and Plant Health Inspection Service (APHIS). It may not necessarily express APHIS' views. This article also benefitted from knowledge exchange as part of the "Advancing behavioral models" pursuit supported by the National Socio-Environmental Synthesis Center (SESYNC) under funding received from the National Science Foundation DBI-1639145. The authors wish to thank our cooperators at USDA-APHIS-PPQ, including Alison Neeley and others, who provided invaluable input, assistance, and data access. We are grateful to Jessica Blakely for her contributions in the early stages of this research, Matt Muir for iNaturalist insights, and two anonymous reviewers for their helpful suggestions.
Association of Dietary Cholesterol Intake With Risk of Gastric Cancer: A Systematic Review and Meta-Analysis of Observational Studies
Background: Many case–control studies have investigated the association between dietary cholesterol and gastric cancer, yielding inconsistent findings. We carried out a systematic review and meta-analysis of observational studies to assess the relationship between dietary cholesterol intake and gastric cancer among adults. Methods: PubMed, Scopus, and Google Scholar were systematically searched to identify articles that evaluated the association of dietary cholesterol with gastric cancer up to May 2021. Pooled odds ratio (ORs) and 95% confidence intervals (CIs) were computed using random-effects models. Dose–response analysis was used to explore the shape and strength of the association. Results: Fourteen case–control studies with 6,490 gastric cancer patients and 17,793 controls met our inclusion criteria. In the meta-analysis of the highest vs. the lowest dietary cholesterol categories, a significantly higher (~35%) risk of gastric cancer was observed in association with high cholesterol consumption (pooled OR: 1.35, 95% CI: 1.29–1.62, I2 = 68%; 95%CI: 45–81%). Subgroup analysis also showed this positive relationship in population-based case–control studies, those conducted on non-US countries, those with a higher number of cases and high-quality studies, those that collected dietary data via interviews, studies not adjusted for Helicobacter pylori infection, and studies where the body mass index was controlled. Besides, a non-linear dose–response association was also identified (P = 0.03). Conclusion: This study demonstrated that dietary cholesterol intake could significantly augment the risk of gastric cancer in case–control studies. Prospective cohort studies with large sample sizes and long durations of follow-up are required to verify our results.
INTRODUCTION
Gastric cancer (GC) represents the fifth most common cancer and the third leading cause of cancer deaths in males and females worldwide, with nearly one million new cases and 723,100 deaths from GC every year (1). Given the increasing prevalence of GC and its mortality, new strategies are necessary to minimize the disease burden. Helicobacter pylori infection, high alcohol consumption, obesity, smoking, and dietary factors are the main risk factors of GC (2,3). Numerous studies have shown the association between nutritional factors and GC (3,4). In fact, one meta-analysis found that the total dietary fat was positively associated with GC (5).
Cholesterol is a common nutrient in the human diet, with eggs, red meat, dairy products, fish, and poultry representing its major sources (6). It has been indicated that dietary cholesterol can increase serum cholesterol, low-density lipoprotein (LDL), and high-density lipoprotein (HDL) cholesterol concentrations (7). Hypercholesterolemia may be involved in cancer development via a rise in the level of inflammatory markers (8).
Some meta-analyses demonstrated that high dietary cholesterol intake increases the risk of ovarian, breast, pancreatic, and esophageal cancers (9)(10)(11)(12). However, the association between dietary cholesterol intake and GC risk remains controversial. Some case-control studies have indicated a positive relationship (13,14), while others showed no association (15,16). Based on our knowledge, there is no systematic review and meta-analysis to summarize the findings regarding dietary cholesterol intake and GC.
Therefore, considering the conflicting results and increasing incidence of GC worldwide, we carried out a systematic review and meta-analysis to provide a quantitative synthesis of the existing data on the association between dietary cholesterol intake and the risk of GC in adults. Furthermore, we aimed to assess the shape and strength of the dose-response association between dietary cholesterol intake and GC.
METHODS
The framework of this review was structured according to the Preferred Reporting Items for Systematic Review and Meta-Analysis (PRISMA) statement [(17); Supplementary Table 1].
Search Strategy
An advanced systematic search of PubMed, Scopus, and Google Scholar was performed without any restrictions (including language) using Medical Subject Headings (MeSH) terms and related keywords to discover relevant articles published until May 2021. The search terms were: [("cholesterol*" OR "dietary cholesterol" OR "cholesterol intake" OR "cholesterol consumption" OR "fat intake" OR "dietary fat") AND ("gastrointestinal cancer" OR "gastrointestinal carcinoma" OR "gastrointestinal neoplasm" OR "gastrointestinal adenocarcinoma" OR "gastrointestinal tumor" OR "gastric cancer" OR "gastric carcinoma" OR "gastric neoplasm" OR "gastric adenocarcinoma" OR "gastric tumor" OR "stomach cancer" OR "stomach carcinoma" OR "stomach neoplasm" OR "stomach adenocarcinoma" OR "stomach tumor")]. Besides, the reference lists of the relevant articles and reviews were manually inspected in order to complete the search. The protocol of this investigation was registered in the International Prospective Register of Systematic Reviews (PROSPERO) (CRD42021255008).
Inclusion Criteria
Studies with the following criteria were included: (1) a prospective cohort or case-control design; (2) participants were aged ≥18 years; (3) provided risk estimates, including relative risk (RR), hazard ratios (HRs), and odds ratios (ORs) with 95% confidence intervals (CIs) to evaluate the association between dietary cholesterol intake and GC. When several studies used one dataset, we selected the one with the greatest number of cases. Two independent authors reviewed articles according to the mentioned items. If they encountered any controversy, the principal investigator resolved the issue.
Exclusion Criteria
Unpublished papers, abstracts, ecological studies, reviews, letters, and comments were excluded. Furthermore, studies that considered another cancer along with GC and articles that used population-attributable risks to assess the association were removed.
Data Extraction
The following items were extracted from each included study: name of the first author, publication year, study location, study design, gender, age (mean/range), the total number of participants, cases, controls, median/range of cholesterol intake in each category, most adjusted RRs, HRs, or ORs and 95% CIs, dietary assessment method, outcome assessment approach, and adjustments. Two authors extracted the data independently, and the corresponding author resolved any disagreements.
Risk of Bias Assessment
The risk of bias for each study was determined using the Newcastle-Ottawa scale (18). Each study received an overall score between 0 and 9 according to the selection of case and control groups, comparability, and ascertainment of exposure and outcome. A total score of ≥7 was considered representative of a high-quality study.
Statistical Methods
We used a random-effects model to compute summary risk estimates and 95% CIs for the associations between dietary cholesterol intake (highest vs. lowest categories) and GC. Between-study heterogeneity was assessed using the I² index and its CI (19). I² values of 25-50%, 50-75%, and >75% were considered low, moderate, and high heterogeneity, respectively (20). To identify potential sources of heterogeneity, subgroup and meta-regression analyses were conducted based on study design (population-based case-control studies, hospital-based case-control studies), number of cases, study quality, exposure reporting method, and adjustments (yes/no) for H. pylori infection, energy intake, and body mass index (BMI). In studies that reported separate risk estimates for each gender, we first combined the risk estimates using a fixed-effects model and then entered the pooled estimate into the final analysis.
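For readers unfamiliar with the pooling step, the sketch below shows a DerSimonian-Laird random-effects combination of study-level odds ratios together with the I² statistic; the input ORs and CIs are hypothetical and only illustrate the calculation, they are not the included studies (the actual analyses were run in STATA).

import numpy as np

def random_effects_pool(or_values, ci_lows, ci_highs):
    """DerSimonian-Laird random-effects pooling of study odds ratios.
    Study-level ORs and 95% CIs are converted to log-ORs and standard errors."""
    y = np.log(or_values)                                   # log odds ratios
    se = (np.log(ci_highs) - np.log(ci_lows)) / (2 * 1.96)  # SE from 95% CI width
    w = 1.0 / se**2                                         # fixed-effect weights
    y_fixed = np.sum(w * y) / np.sum(w)
    Q = np.sum(w * (y - y_fixed)**2)                        # Cochran's Q
    df = len(y) - 1
    C = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (Q - df) / C)                           # between-study variance
    w_re = 1.0 / (se**2 + tau2)                             # random-effects weights
    y_re = np.sum(w_re * y) / np.sum(w_re)
    se_re = np.sqrt(1.0 / np.sum(w_re))
    i2 = max(0.0, (Q - df) / Q) * 100 if Q > 0 else 0.0     # I^2 heterogeneity (%)
    pooled_or = np.exp(y_re)
    ci = (np.exp(y_re - 1.96 * se_re), np.exp(y_re + 1.96 * se_re))
    return pooled_or, ci, i2

# Example with hypothetical study-level estimates (not the actual included studies)
ors = np.array([1.2, 1.6, 0.9, 1.8])
lo  = np.array([0.8, 1.1, 0.6, 1.2])
hi  = np.array([1.8, 2.3, 1.4, 2.7])
print(random_effects_pool(ors, lo, hi))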
We used the generalized least-squares trend estimation method to conduct a linear dose-response analysis (21,22). Estimated study-specific slope lines were combined to create an average slope using a random-effects model. Studies that reported the number of cases and controls, the mean/median intake of cholesterol, and the RRs with a 95% CI for at least three exposure categories were eligible for dose-response analysis. For studies that only reported the total number of cases and controls, we estimated the number of cases and controls in each category by dividing the total number by the number of categories.
In non-linear dose-response analysis, exposures were modeled using restricted cubic splines with three knots at the 10th, 50th, and 90th percentiles of the distribution. The correlation within each set of provided risk estimates was taken into account, and the study-specific estimates were combined using a one-stage linear mixed-effects meta-analysis. Significance of non-linearity was determined by testing the null hypothesis that the coefficient of the second spline equals zero.
Publication bias was assessed using Egger's linear regression test and funnel plot inspection (23). Sensitivity analysis was done using a random-effects model to assess the impact of each study on the overall risk estimate. This analysis was carried out by excluding each study in turn and reanalyzing the data. All analyses were done using STATA version 16.0, and P < 0.05 was considered statistically significant for all tests.
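The Egger test referenced here regresses each study's standardized effect on its precision and examines the intercept; a minimal sketch with hypothetical inputs is shown below (the actual analysis was run in STATA, so the data and code are illustrative only).

import numpy as np
import statsmodels.api as sm

def egger_test(log_or, se):
    """Egger's linear regression test for funnel-plot asymmetry:
    regress the standard normal deviate (effect/SE) on precision (1/SE)
    and test whether the intercept differs from zero."""
    snd = log_or / se
    precision = 1.0 / se
    X = sm.add_constant(precision)
    fit = sm.OLS(snd, X).fit()
    return fit.params[0], fit.pvalues[0]   # intercept and its p-value

# Hypothetical study-level log-ORs and standard errors
log_or = np.log(np.array([1.2, 1.6, 0.9, 1.8, 1.3]))
se     = np.array([0.21, 0.18, 0.25, 0.20, 0.15])
print(egger_test(log_or, se))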
Meta-Analysis
In total, 14 case-control studies (13-16, 24-33) were included in the analysis of the highest vs. the lowest dietary cholesterol intake and risk of GC. The meta-analysis indicated an increased risk of GC among participants who consumed the greatest amount of cholesterol compared to participants with the lowest cholesterol intake (pooled OR: 1.35, 95% CI: 1.29-1.62, I² = 68%; 95% CI: 45-81%) (Figure 2). Subgroup analysis and meta-regression failed to detect potential sources of heterogeneity. Furthermore, subgroup analysis indicated a positive relationship between dietary cholesterol and GC in population-based case-control studies, studies conducted in non-US countries, those with a higher number of GC patients (≥400), high-quality studies, those that collected dietary data through interviews, studies not adjusted for H. pylori infection, and studies where the BMI was controlled (Table 2). In addition, sensitivity analysis did not show evidence that any single study drove the overall risk estimate (Supplementary Figure 1). No evidence of publication bias was observed through the Egger test (P = 0.83) and funnel plot (Supplementary Figure 2). Findings from linear dose-response analysis demonstrated that a 100 mg/d increment in cholesterol intake was not associated with the risk of GC (pooled OR: 1.05, 95% CI: 0.99-1.12, I² = 84%; 95% CI: 69-91%) (Figure 3). Sensitivity analysis was done to assess the effect of each study on the overall effect size (Supplementary Figure 3). Because the study of Toorang et al. had a major effect on the main analysis, we repeated the analysis once without it. Here, a marginally significant association was identified between a 100 mg/d increment in cholesterol intake and GC (pooled OR: 1.07, 95% CI: 1.00-1.15, I² = 65%; 95% CI: 22-85%). The study design and the number of cases were sources of heterogeneity in the subgroup analysis. Besides, a positive association was seen in population-based case-control studies, studies with a higher number of cases, and studies adjusted for BMI (Table 2). Moreover, there was no evidence of publication bias in the Egger test (P = 0.18) and funnel plot (Supplementary Figure 4).
A non-linear dose-response association was observed between dietary cholesterol intake and the risk of GC (P = 0.03; Figure 4).
DISCUSSION
In this systematic review and meta-analysis of 14 case-control studies, we found that higher intakes of dietary cholesterol were associated with a 35% greater risk of GC among adults. In addition, a non-linear dose-response relationship was observed. This study is the first systematic review and meta-analysis to examine the relationship between cholesterol intake and the risk of GC.
Cholesterol plays a vital role in maintaining cellular homeostasis in the body (34). Major dietary sources of cholesterol include red meat, processed meat, egg yolks, dairy products, fish, butter, cheese, shrimp, and poultry (35). Considering that a high-cholesterol diet might represent an unhealthy dietary pattern and lead to chronic diseases such as cancer and cardiovascular diseases (36,37), the relationship between dietary cholesterol and the risk of cancer has received much attention (11,12). This meta-analysis suggests that high dietary cholesterol intake may elevate the odds of GC. In line with our finding, one hospital-based case-control study in Spain found a positive relationship between cholesterol consumption and GC (38). Jung et al. (39) also reported that high serum cholesterol was linked to the incidence of GC. Furthermore, some meta-analyses found a significant positive association between dietary cholesterol intake and cancers of the ovaries, breasts, pancreas, esophagus, and lungs (9-12, 34).
In contrast, in two meta-analyses, intake of red meat and eggs (rich sources of cholesterol) was not associated with the risk of GC (40,41). Given that cholesterol is consumed in combination with other compounds such as salt, nitrates, multivitamins, minerals, and high-quality protein, the interaction between different nutrients prevents us from understanding the individual effect of cholesterol. We know that cholesterol is found in animal foods and that high-cholesterol diets are poor sources of plant foods, including fruits and vegetables. Evidence indicates that people who consume high amounts of vegetables and fruits have a lower risk of GC (42,43). This effect might be due to the presence of many antioxidants (particularly vitamin C, vitamin E, and carotenoids) in fruits and vegetables, which possess anticarcinogenic properties (44). In addition, an inverse association was seen between serum cholesterol concentrations and the occurrence of GC in some cohort studies (45,46). The amount of cholesterol in cancer cells is higher than in normal cells, and cholesterol contributes to cancer promotion (47). It is still ambiguous whether low serum cholesterol is a cause or an effect in relation to GC, and this issue needs to be examined. Therefore, it is likely that dietary cholesterol increases the risk of cancer without augmenting blood cholesterol levels.
The inconsistencies among studies may be explained by variations in study design, geographic regions, adjustments, reporting of dietary data, quality of studies, and/or the number of cases. It has been shown that H. pylori infection, smoking, alcohol consumption, obesity, salt-rich diet, nitrites, and hot meals are determinants of GC (48,49). High dietary cholesterol intake may take part in GC initiation or progression by supporting H. pylori infection. H. pylori infection leads to gastric atrophy and hypochlorhydria, which promote the colonization of acid-intolerant bacteria (50) and elevate the occurrence of GC (51). Our findings indicated no association between dietary cholesterol intake and GC in the subgroup of studies adjusted for H. pylori infection. Furthermore, most of the included studies were adjusted for smoking and energy intake, which are critical risk factors of GC. Besides, we found a significant positive association between cholesterol intake and GC in studies adjusted for BMI.
There are some potential mechanisms regarding the relationship between cholesterol and GC. Dietary cholesterol might play a role in cancer development via changes in lipid metabolism, which are related to cellular inflammation (52). An increase in total cholesterol and LDL as well as a decrease in HDL could induce the production of inflammatory biomarkers such as interleukin-6 and tumor necrosis factor-α (53).
This study possessed some strengths. First, linear and non-linear dose-response analyses helped to reveal the shape and strength of the probable association. Second, most of the included studies applied an interview-administered questionnaire. Self-reported questionnaires for cholesterol intake assessment might inevitably lead to some misclassification of participants in terms of exposure. Third, most studies took into account a wide range of important confounding factors, including energy intake, smoking, alcohol consumption, and BMI. Finally, publication bias was not detected. Nonetheless, our study had some limitations. First, to our knowledge, no cohort study has examined the association between dietary cholesterol and GC. Because case-control studies have diverse kinds of bias, including selection bias, recall bias, and measurement bias, the case-control nature of the included studies prevented us from reaching a decisive conclusion. Second, some fundamental residual confounders such as H. pylori infection, dietary factors (salt, nitrates, etc.), and lipid-lowering medications (especially statin use) were ignored in the adjustments of most studies. Third, although we tried to detect the sources of heterogeneity among studies, we could not find them through subgroup analysis and meta-regression. Due to the limited number of studies, we could not perform subgroup analysis for other potentially relevant factors. Finally, measurement errors are unavoidable in estimates of dietary cholesterol intake.
In conclusion, this review illustrated an association between high dietary cholesterol intake and GC development in case-control studies. This study suggests the importance of dietary cholesterol modification in the prevention of GC. Considering that all of the included studies had case-control designs prone to biases, these results warrant cohort investigations. Large, long-duration, prospective cohort studies that consider the important dietary and non-dietary covariates are needed to achieve a comprehensive understanding of this matter.
AUTHOR CONTRIBUTIONS
PM and LG designed the work, extracted the data, analyzed the data, and critically reviewed the manuscript. LG wrote the first draft of the manuscript. Both authors contributed to the article and approved the submitted version.
Quantum opening of the Coulomb gap in two dimensions
For a constant density of spinless fermions in strongly disordered two dimensional clusters, the energy level spacing between the ground state and the first excitation is studied for increasing system sizes. The average indicates a smooth opening of the gap when the Coulomb energy to Fermi energy ratio $r_s$ increases from 0 to 3, while the distribution exhibits a sharp Poisson-Wigner-like transition at $r_s \approx 1$. The results are related to the transition from Mott to Efros-Shklovskii hopping conductivity recently observed at a similar ratio $r_s$.
PACS: 71.30.+h, 72.15.Rn

In disordered insulators, a crossover [1] in the temperature dependence of the resistivity ρ(T) is induced by Coulomb interactions from the Mott variable range hopping law (ρ(T) = ρ_M exp[(T_0/T)^{1/3}] in dimension d = 2) to the Efros-Shklovskii behavior (ρ(T) = ρ_ES exp[(T_ES/T)^{1/2}]). The long range nature of the interactions leads to a dip in the single particle density of states, and the assumption that single electron hopping dominates the transport leads to this change in the resistivity. However, a single electron hop may reorganize the location of the other particles, inducing complex many particle excitations. This makes the Coulomb gap problem difficult and gives us the motivation to study the first quantum excitation above the ground state. For d = 2, the strength of Coulomb interactions is very often given in units of the Fermi energy by the dimensionless ratio r_s. For strong disorder, the first excitation energy is expected to become larger when r_s increases, and to decay as 1/L (instead of 1/L^2 for free electrons) when the system size L increases. But one cannot estimate the threshold r_s^C where the Coulomb gap opens without taking into account all the complex many-body quantum processes. Considering spinless fermions in 2d strongly disordered clusters, we confirm from numerical calculations that the gap opens at a value r_s^C ≈ 1.2, and we point out that it is indeed at a similar value (r_s ≈ 1.7) that a change in the hopping conductivity from Mott hopping to Coulomb gap behavior has been reported [2] for an electron gas created at a GaAs/AlGaAs heterostructure. For a statistical ensemble of clusters, we have calculated the many body states at the mean field level given by the Hartree-Fock approximation and we have added the effects of the residual interaction. Keeping the carrier density n_e constant and increasing the size L, the average gap between the ground state and the first excitation behaves as 1/L^α, with α decreasing from 2 to 1 when r_s increases from 0 to 3. Another remarkable effect of the interaction is to yield a sharp transition for the gap distribution: it tends to Poisson or to Wigner-like distributions for small or large r_s respectively in the thermodynamic limit. A critical threshold r_s^C ≈ 1.2 is characterized by a scale invariant gap distribution, reminiscent of the one particle problem [3] at a mobility edge. However, it is only the distribution of the first spacing which exhibits such a transition; the distributions of the next spacings remain Poissonian and are essentially unchanged when r_s varies. Eventually, we discuss the implications for the hopping conductivity and we confirm that the transition for the gap takes place at a smaller r_s than r_s^F ≈ 4-5, where a change in the topology of the persistent currents carried by the ground state has been observed [4,5].
We consider a disordered square lattice with M = L^2 sites occupied by N spinless fermions. The Hamiltonian reads

H = -t Σ_{⟨i,j⟩} (c†_i c_j + c†_j c_i) + Σ_i v_i n_i + U Σ_{i<j} n_i n_j / r_ij,   (1)

where c†_i (c_i) creates (destroys) an electron at the site i, the hopping term t between nearest neighbours characterizes the kinetic energy, the site potentials v_i are taken at random inside the interval [−W/2, +W/2], n_i = c†_i c_i is the occupation number at site i and U measures the strength of the Coulomb repulsion. The boundary conditions are periodic and r_ij is the inter-particle distance for a 2d torus. If a*_B = ħ^2 ε/(m* e^2), m*, ε, a and n_s = N/(aL)^2 denote respectively the effective Bohr radius, the effective mass, the dielectric constant, the lattice spacing and the carrier density, the factor r_s is given by

r_s = 1/(a*_B √(π n_s)) = U/(2t √(π n_e)),   (2)

since in our units ħ^2/(2m* a^2) → t, e^2/(εa) → U and n_e = N/L^2.
In this study, a large disorder to hopping ratio W/t = 15 is imposed for having Anderson localization and Poissonian spectral statistics for the one particle levels at r_s = 0 when L ≥ 8. We study N = 4, 9 and 16 particles inside clusters of size L = 8, 12 and 16 respectively. This corresponds to a constant low carrier density n_e = 1/16. A numerical study via exact diagonalization techniques for sparse matrices is possible only for small systems [4], and does not allow us to vary L at constant density. We are therefore obliged to look for an approximate solution of the problem, using the Hartree-Fock (HF) orbitals, and to control the validity of the approximations. One starts from the HF Hamiltonian, in which the two-body part of Eq. (1) is reduced to an effective single-particle (Hartree and exchange) potential [6-8]; ⟨...⟩ stands for the expectation value with respect to the HF ground state, which has to be determined self-consistently. For large values of the interaction and large system sizes this self-consistent single-particle problem is still non-trivial, since the self-consistent iteration can be trapped in metastable states. This limits our study to small r_s and forbids us to study by this method the charge crystallization discussed in [4] at a larger r_s^W ≈ 12. The mean field HF results can be improved using a method [9,10] known as the configuration interaction method (CIM) in quantum chemistry [11]. Once a complete orthonormal basis of HF orbitals has been calculated (H_HF|ψ_α⟩ = ε_α|ψ_α⟩ with α = 1, 2, ..., L^2), it is possible to build up a basis of Slater determinants for the many-body problem, which can be truncated to the N_H first Slater determinants ordered by increasing energies. The two-body Hamiltonian can then be rewritten in this basis, with residual two-body matrix elements between HF orbitals and d†_α = Σ_j ψ_α(j) c†_j. One gets the residual interaction by subtracting the effective single-particle HF Hamiltonian from this rewritten two-body Hamiltonian. This keeps the two-body nature of the Coulomb interaction, and if N ≫ 2 it is still possible to take advantage of the sparsity of the matrix and to diagonalize it via the Lanczos algorithm.
We have first compared HF and CIM results. Labelling the levels by increasing energy and studying an ensemble of 10^4 samples, we have studied the first spacing Δ_0 = E_1 − E_0. The role of the residual interaction can be seen in Fig. 1. When r_s > 1, the residual interaction reduces the mean gap and slightly changes the distribution. The CIM results agree with the results given by exact diagonalization with an accuracy of the order of 2% when one takes into account the N_H = 10^3 first Slater determinants at r_s = 5 and L = 8. This means that a basis spanning only 0.2% of the total Hilbert space is sufficient for studying the first excitations. For larger L, exact diagonalization is no longer possible, but one can check whether the results vary when N_H increases. In the worst case considered (L = 16, r_s = 2.8) the accuracy in the first four spacings can be estimated to be of the order of 5% when N_H = 2 × 10^3.
Therefore the CIM method allows us to study low energy level statistics for r_s < 3. However, its accuracy is not sufficient to determine a small change of the ground state energy when the boundary conditions are twisted (i.e. the persistent currents). We have calculated the first energy levels for different sizes L. The first average spacing ⟨Δ_0⟩ calculated for an ensemble of 10^4 samples is given in Fig. 2. It exhibits a power law decay as L increases, with an exponent α given in the insert. One finds for the first spacing that α decreases linearly from d = 2 to 1 when r_s increases from 0 to 3. This proves a gradual opening of the mean Coulomb gap. The next mean spacings depend more weakly on r_s, as shown in Fig. 2. For r_s = 0, the distribution of the first spacing s = Δ_0/⟨Δ_0⟩ becomes closer and closer to the Poisson distribution P_P(s) = exp(−s) when L increases, as it should be for an Anderson insulator. For a larger r_s, the distribution seems to become close to the Wigner surmise P_W(s) = (πs/2) exp(−πs^2/4) characteristic of level repulsion in random matrix theory, as shown for r_s = 2.8 and L = 16 for instance. To study how this P(s) goes from Poisson to a Wigner-like distribution when r_s increases, we have calculated a parameter η which decreases from 1 to 0 when P(s) goes from Poisson to Wigner:

η = [var(P(s)) − var(P_W(s))] / [var(P_P(s)) − var(P_W(s))],

where var(P(s)) denotes the variance of P(s), var(P_P(s)) = 1 and var(P_W(s)) = 0.273. In Fig. 4, one can see that the three curves η(r_s) characterizing the first spacing for L = 8, 12, 16 intersect at a critical value r_s^C ≈ 1.2. For r_s < r_s^C the distribution tends to Poisson in the thermodynamic limit, while for r_s > r_s^C it tends to a Wigner-like behavior [12]. At the threshold r_s^C, there is a size-independent intermediate distribution shown in the insert of Fig. 3, exhibiting level repulsion at small s followed by an exp(−as) decay at large s with a ≈ 1.52. This Poisson-Wigner transition characterizes only the first spacing, the distributions of the next spacings being quite different. The insert of Fig. 4 does not show an intersection for the parameter η calculated with the second spacing. The second excitation is less localized than the first one when r_s = 0, since the one particle localization length weakly increases with energy. It is only for L = 16 that the distribution of the second spacing becomes close to Poisson without interaction, and a weak level repulsion occurs as r_s increases. The observed transition, and the difference between the first spacing and the following ones, is mainly an effect of the HF mean field. For the first spacing, the curves η calculated with the HF data are qualitatively the same. At the mean field level the first excitation is a particle-hole excitation starting from the ground state and requires an energy of the order of U/L, with fluctuations around this mean value. The second excitation is again a particle-hole excitation starting from the ground state. The energy spacing between the first and the second excited state is given by the difference of two uncorrelated particle-hole excitations, and a Poissonian distribution follows naturally. For r_s > r_s^C, the Gaussian-like HF distributions for Δ_0 become more Wigner-like when the residual interaction is included. We point out that in metallic quantum dots also, the first excitation is statistically different from the others, as shown by numerical studies [13] within the HF approximation.
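To make the definition of η concrete, the short Python sketch below computes it from an ensemble of first spacings and checks the two limiting cases on synthetic Poisson-like and Wigner-like samples; it is only an illustration of the formula above, not the analysis code used for the HF+CIM data.

import numpy as np

VAR_POISSON = 1.0
VAR_WIGNER  = 4.0 / np.pi - 1.0   # ≈ 0.273 for the Wigner surmise

def eta(spacings):
    """Interpolation parameter between Poisson (eta = 1) and Wigner (eta = 0)
    statistics, computed from an ensemble of first level spacings."""
    s = np.asarray(spacings, dtype=float)
    s = s / s.mean()                       # normalize so that <s> = 1
    v = s.var()                            # variance of the normalized spacing
    return (v - VAR_WIGNER) / (VAR_POISSON - VAR_WIGNER)

# Sanity check on synthetic ensembles (not the actual HF+CIM data):
rng = np.random.default_rng(0)
poisson_like = rng.exponential(1.0, 10000)            # P(s) = exp(-s)
wigner_like  = rng.rayleigh(np.sqrt(2/np.pi), 10000)  # P(s) = (pi s/2) exp(-pi s^2/4)
print(eta(poisson_like), eta(wigner_like))            # ≈ 1 and ≈ 0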
The existence of a critical r_s value for the opening of the Coulomb gap can be understood similarly to [14]. The single particle density of states around the Fermi energy E_F is given by [1] ρ(E) ≈ |E − E_F|/U^2, and the gap size Δ_g = |E_g − E_F| can be estimated from the condition ρ(E_g) ≈ ρ, with ρ ≈ 1/W the mean density of states for W ≫ t, obtaining Δ_g ≈ U^2/W. According to the Fermi golden rule, the inverse lifetime of a Slater determinant built from electrons localized at given sites is Γ_t ≈ t^2 (1/W)(N/L^2), with N/(W L^2) the density of states directly coupled by the hopping term of the Hamiltonian (1). Therefore at zero temperature quantum fluctuations melt the Coulomb gap for Γ_t ≈ Δ_g, giving r_s ≈ r_s^C ≈ 1. We conclude that a crossover from Efros-Shklovskii to Mott hopping conductivity is expected not only upon increasing temperature but also upon increasing carrier density, as observed in [2].
To measure possible delocalization effects, we have calculated the number of occupied sites per particle ξ_s = N/Σ_i ρ_i^2, where ρ_i = ⟨Ψ_0|n_i|Ψ_0⟩ is the charge density of the ground state at the site i. Around r_s ≈ 1.2 and after ensemble averaging, the maximum increase of ξ_s compared to r_s = 0 is negligibly small (2%). It is mainly the distribution and the average value of the first excitation energy that exhibit noticeable effects. This matters for the hopping conductivity. The usual argument is to consider the length L(T) where exp[−(2L/ξ(r_s) + Δ_0(r_s)/kT)] is maximum, with the localization length ξ(r_s) ≈ ξ(0). If one takes for Δ_0(r_s) its average value ≈ (A + B r_s)/L^{α(r_s)} (see Fig. 1), one obtains for the hopping resistivity a smooth and continuous crossover from Mott to Efros-Shklovskii hopping, characterized by a crossover temperature scale T(r_s). This prediction neglects the sharp transition in the distribution of Δ_0 at r_s^C, which could be better included by considering a more typical value for Δ_0(r_s) than its average, for instance obtained from the value s_b for which ∫_0^{s_b} p(s) ds = b, with b = 1/2 for instance. This would introduce a sharp discontinuity at r_s^C in T(r_s). In summary, we have analyzed the Coulomb gap statistics for spinless fermions in a strongly disordered square lattice when r_s < 3. On one hand, we have found a sharp interaction-induced transition at r_s^C ≈ 1.2, characterized by a scale invariant distribution. Below and above the critical point, the gap distribution tends to Poisson or to Wigner-like distributions, respectively, in the thermodynamic limit. This effect is present at the HF mean field level; the residual interaction weakly shifts r_s^C and improves the Wigner-like character of one of the limits. On the other hand, the exponent α characterizing the average gap smoothly decays from 2 to 1 in this range of r_s values. The average gap is substantially reduced from its HF value by the residual interaction. We associate this transition with a crossover in the hopping resistivity inside an insulating phase.
Partial support from the TMR network "Phase Coherent Dynamics of Hybrid Nanostructures" of the European Union is gratefully acknowledged.
The neuroprotective mechanism of sevoflurane in rats with traumatic brain injury via FGF2
Background Traumatic brain injury (TBI) is an acquired brain injury caused by external mechanical forces. Moreover, the neuroprotective role of sevoflurane (Sevo) has been identified in TBI. Therefore, this research was conducted to elucidate the mechanism of Sevo in TBI via FGF2. Methods The key factors of the neuroprotective effects of Sevo in TBI rats were predicted by bioinformatics analysis. A TBI model was induced in rats, which then inhaled Sevo for 1 h and were grouped via lentivirus injection. The modified Neurological Severity Score was adopted to evaluate neuronal damage in rats, followed by motor function and brain water content measurement. FGF2, EZH2, and HES1 expression in brain tissues was evaluated by immunofluorescence staining, and expression of related genes and autophagy factors by RT-qPCR and Western blot analysis. Methylation-specific PCR was performed to assess HES1 promoter methylation level, and ChIP assay to detect the enrichment of EZH2 in the HES1 promoter. Neuronal damage was assessed by cell immunofluorescence staining, and neuronal apoptosis by Nissl staining, TUNEL staining, and flow cytometry. Results Sevo diminished brain edema, improved neurological scores, and decreased neuronal apoptosis and autophagy in TBI rats. Sevo preconditioning could upregulate FGF2, which elevated EZH2 expression, and EZH2 bound to the HES1 promoter to downregulate HES1 in TBI rats. Also, FGF2 or EZH2 overexpression or HES1 silencing decreased brain edema, neurological deficits, and neuronal autophagy and apoptosis in Sevo-treated TBI rats. Conclusions Our results provided a novel insight into the neuroprotective mechanism of Sevo in TBI rats by downregulating HES1 via FGF2/EZH2 axis activation. Supplementary Information The online version contains supplementary material available at 10.1186/s12974-021-02348-z.
Hypotension and hypoxemia can lead to secondary injury and further worsen short-term and long-term outcomes [3]. Because of its high morbidity and long-term sequelae, TBI results in a great elevation of health care expenditure every year [4]. Moreover, TBI has been documented to cause neurological deficits, behavioral alterations, and cognitive decline and to impose a dramatic impact on patients [5]. However, despite advances in developing therapeutic strategies for TBI recovery, effective treatments are currently lacking [6]. Hence, there is an ongoing need to explore the molecular mechanisms underlying TBI in order to identify more effective treatments.
As a halogenated inhalational anesthetic, sevoflurane (Sevo) is approved by the FDA for induction and maintenance of general anesthesia in adult and pediatric inpatients and outpatients undergoing surgery, and it offers autonomic blockade, hypnosis, analgesia, amnesia, and akinesia during surgical and procedural interventions [7]. Furthermore, Sevo postconditioning has been identified to alleviate TBI by reducing neuronal apoptosis and promoting autophagy [8]. Interestingly, it was predicted in our study by microarray analysis that fibroblast growth factor 2 (FGF2) may be a key factor in the neuroprotective effect of Sevo on TBI. FGF2 (also named basic fibroblast growth factor) is a 3-exon gene on human chromosome 4q26-27, which possesses low and high molecular weight (22-, 22.5-, 24-, and 34-kDa) isoforms that are translated from a single transcript starting from alternative, in-frame start codons [9]. Importantly, a prior study showed that FGF2 could protect against blood-brain barrier damage in mice with TBI [10]. Intriguingly, another work elucidated that FGF2 can increase enhancer of zeste homolog 2 (EZH2) expression by activating KDM2B in bladder cancer cells [11]. More importantly, Sevo-upregulated EZH2 was capable of alleviating hypoxic-ischemic cerebral injury in neonatal rats [12]. In addition, it was documented that EZH2 was involved in the transient repression of hairy and enhancer of split 1 (HES1) in erythroid cells [13]. Notably, knockdown of HES1 was able to augment the spatial learning and memory capacity of adult mice with TBI [14].
In this context, we speculated that the FGF2/EZH2/HES1 axis might be correlated with the neuroprotective effect of Sevo on TBI, and we conducted a series of animal and cell experiments to verify this speculation.
Ethics statement
Animal experiments were approved by the Ethics Committee of the First Affiliated Hospital of Zhengzhou University and conducted strictly in line with the Guide for the Care and Use of Laboratory Animals published by the US National Institutes of Health. All efforts were made to minimize the number and suffering of the included animals.
Microarray analysis
The gene expression dataset related to Sevo-treated rat brain tissues, GSE141242, was retrieved from the Gene Expression Omnibus (GEO) database (the platform annotation file was GPL22388 [RTA-1_0] Affymetrix Rat Transcriptome Array 1.0 [transcript (gene) CSV version]), including three control samples (sham) and three treated samples (TBI models). Differential analysis was conducted using the "limma" package in R to screen differentially expressed genes (DEGs), with |log2FC| > 0.6 and p < 0.05 as the screening criteria. The R package "pheatmap" was used to draw a heat map of DEG expression. In the GeneCards database (score ≥ 18), "TBI" was employed as a keyword to search for TBI-related genes, which were then intersected with the DEGs in Sevo-treated rat brain tissues using jvenn to identify the central factors involved in the neuroprotective effects of Sevo on TBI in rats.
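The screening step is simple enough to sketch in code; a minimal Python version assuming the limma statistics have been exported to a CSV with "logFC" and "P.Value" columns (file names and the GeneCards export format are illustrative, not from the study):

```python
import pandas as pd

# Hypothetical input: one row per gene with limma-style statistics.
stats = pd.read_csv("GSE141242_limma_results.csv", index_col=0)

# Screening criteria used in the study: |log2FC| > 0.6 and p < 0.05
degs = stats[(stats["logFC"].abs() > 0.6) & (stats["P.Value"] < 0.05)]
print(f"{len(degs)} DEGs retained")

# Keep the 15 DEGs with the smallest p-values, as done for the heat map
top15 = degs.nsmallest(15, "P.Value")

# Intersect with TBI-related genes retrieved from GeneCards (score >= 18);
# the gene list file is a placeholder for the actual GeneCards export.
tbi_genes = set(pd.read_csv("genecards_tbi_genes.csv")["symbol"])
candidates = sorted(set(top15.index) & tbi_genes)
print("Candidate key factors:", candidates)  # the study obtained FGF2
```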
Rat TBI model construction and lentivirus treatment
Healthy adult male Sprague Dawley rats (weighing 200-220 g; aged 3-4 months; from the Experimental Animal Center of Zhengzhou University) were housed at 20-24 °C with 40-60% humidity and a 12-h/day light cycle, with free access to water and food.
The rats were randomly assigned into 13 groups (Additional file 1: Table 1) according to the random number table, with 12 rats in each group. Postoperatively, they were subdivided into 1-, 3-, 7-, and 14-day groups, with three rats in each group. A modified Feeney's free-falling epidural percussion method was used to establish the TBI model and induce brain injury in rats. Following anesthesia of the rats by intraperitoneal injection of 3% pentobarbital sodium (50 mg/kg), the scalp was cut open 2 mm behind the right coronal suture and 2 mm from the midline. A 5-mm hole was drilled in the skull, leaving the dura intact. A 30-g hammer was dropped from a height of 20 cm to produce a craniocerebral injury (impact force = 600 g·cm). The bone holes were sealed with wax, and the scalp was sutured. The sham-operated rats underwent the same surgical procedure without hammering. Each rat was placed in a 42 × 26 × 26 cm closed anesthesia box. Holes 1.5 cm in diameter on both sides of the box allowed gas input and removal.
At 30 min after model construction, pure oxygen was delivered to sham-operated and TBI rats and removed after 60 min. TBI rats treated with Sevo were given 2.4% Sevo-containing oxygen for 60 min. After an additional 15 min of oxygen inhalation, the rats were removed from the box. A gas analyzer was used to monitor the concentrations of Sevo, oxygen, and carbon dioxide. Following 0.5-h Sevo exposure, the right ventricle of rats was injected with 600 nmol 3-methyladenine (3-MA, diluted in 0.9% normal saline to a final volume of 5 μL), an inhibitor of cell autophagy that specifically blocks autophagosome formation and was used here to consolidate the role of the autophagic pathway in the adaptive neuroprotection following Sevo treatment. The remaining rats were injected with 0.9% normal saline as a control. The rats were euthanized at the end of the experiment, and the cortical tissue was collected and sectioned for subsequent experiments.
Two days before TBI model construction, the rats were immobilized in a stereotaxic frame (RWD, Shenzhen, China) and anesthetized with sodium pentobarbital (3%). The left ventricle (anterior-posterior −1.1 mm, medial-lateral −1.5 mm, dorsal-ventral −4.0 mm from the bregma) of rats was injected with 4 μL of each lentivirus at a titer of 2 × 10⁸ ifu/mL using a stereotaxic instrument (at a rate of 1 μL/min, with the needle retained for 5-10 min). After 2 days, model construction was performed [15].
Isolation and incubation of hippocampal neurons
Healthy female Sprague-Dawley rats on day 17 of pregnancy were routinely anesthetized, disinfected, and dissected, after which the fetal rats were taken out and placed in Dulbecco's modified Eagle's medium (DMEM). Under an anatomic microscope, dissecting tweezers were used to remove the hippocampal tissue, with the meninges and superficial vessels completely discarded to obtain the hippocampal tissues. The hippocampal tissues were cut into pieces with ophthalmic scissors, digested with 0.125% trypsin, and incubated in a 37 °C water bath for 30 min, with shaking 2-3 times during this period. When the digestive juice was turbid and no longer contained tissue mass, digestion was terminated with DMEM containing 10% fetal bovine serum. The tissue pieces were then centrifuged at 1500 r/min for 5 min and the supernatant was discarded, followed by addition of stop solution and another centrifugation. Following supernatant removal, the pellet was again resuspended in stop solution and gently dispersed with a fine-bore micropipette until a uniform cell suspension formed. The cell suspension was filtered through a 200-mesh filter to remove undigested tissue fragments, and the filtered single-cell suspension was collected in a beaker. Next, the suspension was stained with trypan blue, counted on a hemocytometer, plated in a culture plate at a density of 1 × 10⁵ cells/mL, and cultured in a 5% CO₂ incubator at 37 °C. After 8 h, the culture medium was renewed and the cells continued to be cultured.
Culture of 293T cells
293T cells (the Cell Bank of the Type Culture Collection Committee of the Chinese Academy of Sciences) were seeded in a 25-cm² culture flask at a density of 1 × 10⁵ cells/mL and cultured in a 5% CO₂ incubator at 37 °C. When reaching 60-70% confluence, the cells were passaged for subsequent experiments.
Transduction of rat hippocampal neurons and establishment of in vitro TBI models
Cells in good condition were screened under a microscope and transduced with lentivirus using Lipofectamine 2000 reagent for 72-96 h. Control cells were treated with 21% O₂ and 5% CO₂ for 3 h, and Sevo cells were treated with 4.1% Sevo, 21% O₂, and 5% CO₂ for 3 h. The TBI cell model was established as previously described [16]. Briefly, a yellow pipette tip (1.5 mm in diameter) and a white pipette tip (1 mm in diameter) were used to mechanically cut the cultured rat hippocampal neurons to establish a TBI cell model injured by mechanical force.
Neurological function evaluation
The Modified Neurological Severity Score (mNSS) was adopted for evaluation of motion, sensation, reflex, muscle mass, abnormal behavior, vision, touch, and balance of rats, graded on a scale of 0-18. A score of 0 indicated a normal state, and a score of 18 indicated severe neurological deficits; the higher the score, the greater the neurological damage. Neurological function evaluation was conducted on days 1, 3, 7, and 14 following TBI modeling [8,17]. The wire grip test was performed to determine the motor function of rats [18]. Each rat was placed on a horizontal wire (80 cm in length and 7 mm in diameter) 45 cm above the ground and was allowed to crawl freely on the wire within 60 s. Bermpohl's method was applied to score motor function, ranging from 0 to 5 points (a total of six grades), and the evaluation was conducted on days 1, 3, 7, and 14 after TBI. The lower the score, the more severe the impairment of motor function.
Measurement of brain water content
Following neurological function assessment, three rats in each group were euthanized at 1, 3, 7, and 14 days, respectively, under deep anesthesia, followed by removal of the brain. A precision electronic scale was used to weigh the brain tissue as the "wet weight", and the weighed brain tissue was then put in a suitable container in a 120 °C oven for about 48 h. During this process, the weight was measured several times until no further change occurred; this was the "dry weight". The brain water content was calculated using the following formula: brain water content = (wet weight − dry weight)/wet weight × 100% [8,17].
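In code, the wet-dry method reduces to a one-line calculation; a minimal sketch with illustrative weights (not data from the study):

```python
def brain_water_content(wet_weight_g: float, dry_weight_g: float) -> float:
    """Percent brain water by the standard wet-dry weight method."""
    return (wet_weight_g - dry_weight_g) / wet_weight_g * 100.0

# Illustrative values only
print(f"{brain_water_content(1.52, 0.33):.1f}% water")  # -> 78.3% water
```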
Nissl staining
Cortical tissues surrounding the lesion areas were harvested, fixed in formaldehyde, and prepared as 4-μm paraffin-embedded sections. The sections were dewaxed with xylene, rehydrated in a graded series of alcohol, and stained with Nissl staining solution for 5 min. Damaged neurons shrank or contained vacuoles, whereas normal neurons had a relatively large and full soma with round, large nuclei. Neurons were counted under a microscope in five randomly selected visual fields.
Terminal deoxynucleotidyl transferase-mediated dUTP-biotin nick end labeling (TUNEL) staining
Cortical tissues were collected, fixed with 4% paraformaldehyde for 24 h and rinsed with running water. Next, the tissues were dehydrated in ascending series of alcohol (70%, 80%, 90%, 95% and 100%) for 30 min, cleared with xylene, embedded in paraffin and cut into 4-μm-thick sections. The sections were stained with TUNEL cell apoptosis kit (C1086, Beyotime Biotechnology Co., Shanghai, China) and apoptotic cells were observed under an inverted fluorescence microscope (HB050; Zeiss, Hamburg, Germany) in six randomly selected visual fields from each section. The percentage of the number of apoptotic cells to the total number of cells was the apoptosis rate.
Tissue immunofluorescence staining
Cortical tissues around the lesion area were fixed with formaldehyde and embedded in paraffin to prepare 3-μm-thick paraffin sections. The sections were deparaffinized by xylene, hydrated with gradient alcohol, and boiled in citric acid buffer for 5 min for antigen retrieval. The sections were then blocked with 5% goat serum and incubated overnight with primary antibodies against FGF2 and EZH2 (detailed information is shown in Additional file 1: Table 2). Following phosphate-buffered saline (PBS) washing, the sections were incubated with secondary antibody goat anti-rabbit IgG (1:200) conjugated by fluorescein isothiocyanate for 30 min. The nuclei were stained with 4′,6-diamidino-2-phenylindole (DAPI) and then observed under a fluorescence microscope [8,12,17].
Flow cytometry
A total of 1 × 10⁶ cells were treated with 80 μL RNase (0.03 g/L) and 150 μL propidium iodide (PI) (0.05 g/L, containing 0.03% Triton X-100) and reacted at 4 °C for 30 min in the dark. A flow cytometer was used to detect the percentage of cells in different cell-cycle phases. CellQuest software was used to acquire at least 1 × 10⁴ cells, and ModFit software was used to analyze the results, which were expressed as percentages [19].
Western blot analysis
The cortical tissues (30 mg) and cells surrounding the lesion area were lysed in lysis buffer, followed by quantification of the total protein concentration using a bicinchoninic acid kit. After the lysate was added to the reducing loading buffer, samples were prepared and boiled for 8 min. Next, 25 μg of total protein was loaded per sample before protein separation by sodium dodecyl sulfate-polyacrylamide gel electrophoresis. The protein was electroblotted onto a polyvinylidene fluoride membrane, which was blocked in 5% skim milk at room temperature for 2 h. The membrane was probed with the corresponding primary antibodies (Additional file 1: Table 2) at 4 °C overnight and re-probed with horseradish peroxidase-conjugated secondary goat anti-rabbit antibody (1:5000; ab6721, Abcam) at room temperature the next day. The protein bands were visualized by enhanced chemiluminescence. ImageJ software was applied for gray-scale quantification of protein bands, with glyceraldehyde-3-phosphate dehydrogenase (GAPDH) as a normalizer.
Cell immunofluorescence analysis
Sterile cover glasses coated with polylysine were placed into a 24-well plate. The cultured neurons were digested into a single-cell suspension, seeded on the cover glasses, and cultured in cell culture medium in a constant-temperature incubator.
After 48 h, the cells were observed under a microscope. When reaching about 80% confluence, the cells were fixed with 3% ice-cold paraformaldehyde at room temperature for 15 min, then blocked with 3% bovine serum albumin (BSA) in PBS for 1 h. Thereafter, the cells were probed with primary antibody to MAP2 (1:50, ab183830, Abcam) at 4 °C overnight. After washing with PBS, the cells were re-probed with fluorescein isothiocyanate-labeled secondary antibody (goat anti-rabbit, 1:100, A-10684, Invitrogen Inc., Carlsbad, CA, USA) in PBS in the dark for 1 h at room temperature. Next, the cover glass was mounted with DAPI-containing mounting medium (1 μg/mL), removed from the 24-well plate, and placed upside down on a glass slide. A laser confocal microscope was used to analyze the results.
Fluorescence staining of neuronal axons was conducted as previously described [20]. Hippocampal neurons at the logarithmic growth phase were selected for follow-up experiments. The neurons were cultured for 48 h and fixed for immunofluorescence labeling. Axons (by Tau-1) and dendrites (by MAP2) were labeled, respectively. Immunofluorescence double-labeling: hippocampal neurons were cultured for 48 h, fixed with 4% paraformaldehyde at room temperature for 15-20 min, rinsed with PBS-0.1% Triton, treated with PBS-0.5% Triton for 5-10 min, rinsed again with PBS-0.1% Triton, blocked with 5% BSA for 1 h, and incubated with mouse anti-MAP2 (1:200) or mouse anti-Tau-1 (1:200) overnight at 4 °C. After rinsing, fluorescently labeled secondary antibody was added to the neurons and incubated at room temperature for 1 h before observation under a confocal microscope. Image-Pro Plus software was used to measure the protrusion length. First, the length unit was calibrated against the scale bar (20 μm) provided in the picture, and the length of each segment of the neuronal axon was then measured with the calibrated ruler. After the measured data were exported, the total axon length was calculated in an Excel table and the data were recorded. The axon length and number of 50 neurons in each group were counted, and the data were then statistically analyzed. Student's t-test was used for comparison between two groups, with p < 0.05 representing a significant difference. The average axon length of each group was calculated and presented in a bar chart.
Reverse transcription quantitative polymerase chain reaction (RT-qPCR)
Total RNA was extracted from tissues or cells using TRIzol reagent (15596026, Invitrogen), followed by reverse transcription into cDNA following the manual of a PrimeScript RT reagent Kit (RR047A, Takara, Tokyo, Japan). The synthesized cDNA was subjected to RT-qPCR using the Fast SYBR Green PCR kit (Applied Biosystems, Carlsbad, CA, USA) on an ABI PRISM 7300 RT-PCR system (Applied Biosystems). Three replicates were set up for each well. Relative gene expression was calculated using the 2^−ΔΔCt method and standardized to GAPDH. The primer sequences are depicted in Additional file 1: Table 3.
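A minimal sketch of the 2^−ΔΔCt calculation with GAPDH as the normalizer; the Ct values below are illustrative only, not from the study:

```python
def ddct_relative_expression(ct_target, ct_gapdh, ct_target_ref, ct_gapdh_ref):
    """Relative expression by the 2^-ddCt method, normalized to GAPDH.

    ct_* are mean Ct values; *_ref belong to the calibrator (e.g., sham) group.
    """
    dct = ct_target - ct_gapdh              # normalize target to GAPDH
    dct_ref = ct_target_ref - ct_gapdh_ref  # calibrator dCt
    ddct = dct - dct_ref
    return 2.0 ** (-ddct)

# Illustrative Ct values: fold change of the target vs. the sham group
print(ddct_relative_expression(26.1, 18.0, 24.8, 18.1))  # ~0.38-fold
```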
Chromatin immunoprecipitation (ChIP) assay
The National Center for Biotechnology Information (https://www.ncbi.nlm.nih.gov/) and the JASPAR database (http://jaspar.genereg.net/) were used to predict binding sites of EZH2 in the HES1 promoter. The ChIP assay was performed as per the kit instructions (Millipore, Billerica, MA, USA) and previously reported experimental methods. Cells were cross-linked in 1% formaldehyde for 10 min at 37 °C and then sonicated to obtain chromatin fragments averaging 400-800 bp. A small portion of the sample was used as input, while the rest was incubated overnight at 4 °C with anti-EZH2 antibody (ab191250, 1:1000, Abcam) or IgG (Santa Cruz Biotechnology Inc., Santa Cruz, CA, USA) as a negative control (NC). The immunocomplexes were precipitated the following day by adding protein A immunomagnetic beads, followed by de-crosslinking in a metal bath at 68 °C for 2 h. DNA was extracted using phenol-chloroform-isopropanol and quantitatively analyzed by RT-qPCR. One primer set was specific to the HES1 promoter and the other to an NC region (kidney-specific promoter: Tamm-Horsfall). The primer sequences are displayed in Additional file 1: Table 4.
Methylation-specific PCR (MSP)
DNA extraction: 3 days after modeling, the rats were euthanized to obtain brain tissues. DNA methylation treatment was performed using the EZ DNA Methylation-Gold™ Kit (Zymo Research Corp., USA), and the required DNA was then isolated.
PCR amplification: agarose gel electrophoresis was used to verify the extracted product, which was then modified with the EZ DNA Methylation-Gold™ Kit, followed by PCR amplification. The primers for HES1 gene methylation (Additional file 1: Table 5) and non-methylation (Additional file 1: Table 4) were synthesized by referring to the relevant literature. The methylation level of the CpG island in the HES1 promoter region was determined by MSP. PCR was performed in a total volume of 25 µL containing 80 ng DNA template under the following reaction conditions: predenaturation at 95 °C for 5 min; 35 cycles of denaturation at 95 °C for 30 s, annealing at 55 °C for 30 s, and extension at 72 °C for 30 s; and a final extension at 72 °C for 10 min. Nuclease-free water served as an NC. Samples of PCR products (172 bp for methylated and 175 bp for unmethylated) were visualized on a 2% agarose gel containing 5 mg/mL ethidium bromide, and analyzed and photographed under ultraviolet irradiation using a ChemiDoc™ MP imaging system (Bio-Rad Laboratories Inc., Hercules, CA, USA).
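The cycling conditions can be written out as a small configuration to sanity-check the nominal run time; a sketch using the temperatures and durations stated above:

```python
# MSP cycling conditions as stated in the text (degC, seconds)
msp_protocol = {
    "predenaturation": (95, 5 * 60),
    "cycles": 35,
    "per_cycle": [
        ("denaturation", 95, 30),
        ("annealing", 55, 30),
        ("extension", 72, 30),
    ],
    "final_extension": (72, 10 * 60),
}

cycle_s = sum(step[2] for step in msp_protocol["per_cycle"])
total_s = (msp_protocol["predenaturation"][1]
           + msp_protocol["cycles"] * cycle_s
           + msp_protocol["final_extension"][1])
print(f"Nominal thermocycler time: {total_s / 60:.1f} min")  # ~67.5 min
```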
Dual luciferase reporter assay
Constructs of the 3′-untranslated region (UTR) dual luciferase reporter vectors of HES1 and mutant plasmids with mutations in the binding sites of HES1 to EZH2 were made as follows: PmirGLO-HES1-wild type (WT) and PmirGLO-HES1-mutant type (MUT), respectively. The reporter plasmids were, respectively, co-transfected with overexpression (oe) EZH2 plasmid or oe NC plasmid into 293T cells, which were lysed after 48 h and centrifuged at 12,000×g for 1 min, with the supernatant harvested. Luciferase activity was detected on a Dual-Luciferase® Reporter Assay System (E1910, Promega, Madison, WI, USA). The relative luciferase activity was calculated as the ratio of firefly luciferase activity to renilla luciferase activity.
Statistical analysis
Measurement data were summarized as mean ± standard deviation. Statistical analysis of all data in the current study was conducted using SPSS 21.0 software (IBM Corp., Armonk, NY, USA), with p < 0.05 considered statistically significant. Data among multiple groups were compared by one-way analysis of variance (ANOVA), and neurological scores and brain water content of rats at different time points were compared by two-way ANOVA, followed by Tukey's post hoc test. Neurological function scores and motor function scores were ordinal data and were assessed by nonparametric tests.
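A minimal sketch of the one-way ANOVA followed by Tukey's post hoc test, here using SciPy and statsmodels rather than SPSS; the data below are synthetic and purely illustrative:

```python
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Illustrative data: one measurement per rat in three treatment groups
rng = np.random.default_rng(0)
sham = rng.normal(78.0, 0.5, 12)
tbi = rng.normal(81.5, 0.5, 12)
tbi_sevo = rng.normal(79.5, 0.5, 12)

# One-way ANOVA across the groups
f_stat, p_val = f_oneway(sham, tbi, tbi_sevo)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_val:.3g}")

# Tukey's HSD post hoc test, as applied after ANOVA in the study
values = np.concatenate([sham, tbi, tbi_sevo])
labels = ["sham"] * 12 + ["TBI"] * 12 + ["TBI+Sevo"] * 12
print(pairwise_tukeyhsd(values, labels, alpha=0.05))
```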
Sevo alleviated neuronal damage and apoptosis to reduce TBI-induced neurological deficits in rats
In order to study the effect of Sevo on TBI, a TBI model was induced in rats that were then treated with Sevo. As depicted in Fig. 1A, TBI rats had higher mNSS and brain water content yet lower motor function scores than sham-operated rats. In contrast, Sevo treatment reduced mNSS and brain water content while increasing the motor function score. As reflected by Western blot analysis, brain-derived neurotrophic factor (BDNF) and NeuN expression was lower in the cortical tissue of TBI rats than in sham-operated rats, which was reversed by Sevo treatment (Fig. 1B, C; Additional file 2: Fig. S1A, B). Nissl staining results showed that, compared with sham-operated rats, the cortical tissue of TBI rats showed severe neuronal damage, which was alleviated by Sevo treatment (Fig. 1D). TUNEL staining data further displayed that, in contrast to sham-operated rats, cell apoptosis was higher in the cortical tissue of TBI rats, while it was decreased upon Sevo treatment (Fig. 1E). In addition, MAP2 was utilized as a neuronal marker, and neuronal damage was detected by cell immunofluorescence staining, with the results showing augmented neuronal damage in TBI rats, which Sevo alleviated (Fig. 1F). Moreover, axonal length was found to be significantly longer in TBI rats than in sham-operated rats, and was shorter in Sevo-treated TBI rats than in TBI rats (Fig. 1G). Based on flow cytometric data, neuronal apoptosis was elevated in TBI rats versus sham-operated rats, which was negated by Sevo treatment (Fig. 1H). Conclusively, Sevo reduced neuronal damage and apoptosis, improving neurological scores in TBI rats.
Sevo promoted FGF2 expression and in turn attenuated neurological deficits and neuronal death caused by TBI in rats
To predict the key factors responsible for the neuroprotective effects of Sevo in TBI rats, we performed differential analysis of GSE141242 related to Sevo-treated rat brain tissues, which yielded 65 DEGs, including 41 upregulated genes and 24 downregulated genes (Fig. 2A). Then, we plotted the expression heat map of the top 15 DEGs with the smallest p-values (Fig. 2B). A total of 307 TBI-related genes were obtained through the GeneCards database and intersected with the top 15 DEGs with the smallest p-values, finally yielding FGF2 (Fig. 2C). Therefore, we presumed that FGF2 might be a key factor in the neuroprotective effect of Sevo on TBI rats.
Gene microarray data analysis showed that Sevo enhanced FGF2 expression in TBI rats (Fig. 2D). Western blot analysis results further presented a decline in the FGF2 protein expression in the cortical tissue of the TBI rats, while Sevo treatment resulted in an increase in the FGF2 protein expression (Fig. 2E; Additional file 2: Fig. S1C). The results of immunofluorescence assay and Western blot analysis showed that the FGF2 expression was increased in the cortical tissue of the oe FGF2-treated rats, but a contrary result was noted in the absence of FGF2 (Fig. 2F, G; Additional file 2: Fig. S1D). As documented in Fig. 2H, overexpression of FGF2 diminished mNSS and brain water content while increasing the motor function score in Sevo-treated TBI rats, with the opposite observed after silencing of FGF2. Moreover, the results of RT-qPCR and Western blot analysis indicated that FGF2 overexpression enhanced the expression of BDNF and NeuN in the cortical tissue of Sevo-treated TBI rats, but FGF2 silencing produced the opposite results (Fig. 2I, J; Additional file 2: Fig. S1E, F). Additionally, Nissl staining results exhibited that FGF2 overexpression reduced the degree of neuronal damage in the cortical tissue of Sevo-treated TBI rats, while FGF2 silencing enhanced the neuronal damage (Fig. 2K). TUNEL staining data presented a decline in the neuronal apoptosis in the cortical tissue of Sevo-treated TBI rats following FGF2 overexpression, with the opposite observed after silencing FGF2 (Fig. 2L). In addition, Western blot analysis showed that sh FGF2 treatment resulted in upregulation of autophagy-related genes (LC3-I, LC3-II, Beclin-1, and P62), whereas oe FGF2 treatment caused downregulation of these genes in Sevo-treated TBI rats (Fig. 2M).
Moreover, the results of cell immunofluorescence staining revealed that FGF2 overexpression reduced the hippocampal neuron damage, which was aggravated following FGF2 silencing (Fig. 2N). The results in Fig. 2O suggested that axonal length was shortened by overexpressing FGF2 but lengthened by silencing FGF2. Flow cytometric data presented a reduction of neuronal apoptosis after overexpressing FGF2, the effect of which was reversed by silencing of FGF2 (Fig. 2P). Collectively, Sevo pretreatment can upregulate FGF2, thus decreasing neuronal autophagy and apoptosis, as well as attenuating neurological deficits in TBI rats.
Sevo augmented EZH2 expression via FGF2 and then alleviated neurological deficits caused by TBI in rats
The aforesaid results suggested that Sevo could increase FGF2 expression in TBI rats to prevent TBI. Next, we further investigated the mechanism by which FGF2 exerted neuroprotective functions during TBI. Western blot analysis data revealed a reduction in the EZH2 expression in the cortical tissue of TBI rats, which was reversed by Sevo treatment (Fig. 3A; Additional file 2: Fig. S1G). In addition, overexpression of FGF2 promoted the EZH2 expression, which was negated following FGF2 silencing (Fig. 3B; Additional file 2: Fig. S1H), suggesting that FGF2 can positively regulate the expression of EZH2. Based on the results of immunofluorescence staining (Fig. 3C) and Western blot analysis (Fig. 3D), sh EZH2 treatment contributed to a decline of EZH2 expression in the cortical tissue of Sevo-treated TBI rats, which was rescued by further oe FGF2 treatment.
Silencing EZH2 elevated mNSS and brain water content while reducing the motor function score in Sevo-treated TBI rats, which was neutralized by further overexpression of FGF2 (Fig. 3E). Furthermore, RT-qPCR and Western blot analysis results displayed that BDNF and NeuN expression was reduced upon silencing of EZH2 in the cortical tissue of Sevo-treated TBI rats, which was normalized by further overexpression of FGF2 (Fig. 3F). The results of Nissl and TUNEL staining documented that neuronal damage and apoptosis were augmented in the cortical tissue of Sevo-treated TBI rats following silencing of EZH2, which was annulled after further overexpression of FGF2 (Fig. 3G, H).
Western blot analysis exhibited upregulation of LC3-I, LC3-II, Beclin-1, and P62 in Sevo-treated TBI rats with silencing of EZH2, which was nullified by additional overexpression of FGF2 (Fig. 3I). Moreover, flow cytometric analysis described an enhancement in the neuronal apoptosis in the presence of EZH2 silencing, the effect of which was abolished by overexpression of FGF2 (Fig. 3J). In summary, Sevo upregulated FGF2 expression to activate EZH2, thereby arresting neurological deficits in TBI rats.
Sevo inhibited HES1 expression by upregulating EZH2 and promoting HES1 promoter methylation
EZH2 has been documented to repress HES1 in erythroid cells [13]. To investigate how EZH2 orchestrated HES1 expression, RT-qPCR and Western blot analysis were conducted, the results of which depicted that, compared with sham-operated rats, HES1 expression was obviously elevated in the cortical tissue of TBI rats, while it was diminished in the cortical tissue of TBI rats after Sevo treatment (Fig. 4A). To determine whether EZH2 suppressed HES1 expression by binding to the promoter of HES1, we first predicted the WT or MUT binding sites between EZH2 and the HES1 promoter through the JASPAR database (Fig. 4B). Additionally, dual luciferase reporter assay data displayed that overexpression of EZH2 decreased the luciferase activity of WT-HES1 without altering that of MUT-HES1 (Fig. 4C). ChIP experiment results showed that EZH2 was highly enriched on the promoter region of HES1 (Fig. 4D), confirming EZH2 binding to the promoter of HES1. Furthermore, the results of RT-qPCR and Western blot analysis suggested that overexpression of EZH2 notably reduced HES1 expression, but silencing of EZH2 led to the opposite result (Fig. 4E). MSP experimental results documented that there was hyper-methylation of the HES1 promoter under Sevo treatment (Fig. 4F). As illustrated by RT-qPCR, Sevo treatment caused downregulation of HES1 in hippocampal neurons (Fig. 4G). In addition, RT-qPCR and Western blot analysis results exhibited that the decrease in HES1 expression by Sevo treatment was reversed by treatment with 5-aza-2′-deoxycytidine (5-aza-CdR; a methylation inhibitor) (Fig. 4H). To sum up, Sevo could upregulate EZH2 and promote HES1 promoter methylation, thus inhibiting the expression of HES1.
Fig. 2 Sevo activates FGF2 to relieve neurological deficits and neuronal death in TBI rats. A Differential analysis of the GSE141242 dataset related to Sevo-treated rat brain tissues. B The expression heat map of the top 15 DEGs with the smallest p-values. C Intersection of the top 15 DEGs with the smallest p-values with TBI-related genes obtained through the GeneCards database. D FGF2 expression in TBI rats after Sevo treatment analyzed by gene microarray data. E Western blot analysis of FGF2 expression in the cortical tissue of sham-operated, TBI, or Sevo-treated TBI rats. Sevo-treated TBI rats were treated with sh NC, sh FGF2, oe NC, or oe FGF2. F Immunofluorescence analysis of FGF2 expression in the cortical tissue of TBI rats (scale bar: 25 μm). G Western blot analysis of FGF2 expression in the cortical tissue of TBI rats. H Neurological function evaluation by mNSS, brain water content measurement, and motor function score in TBI rats. I The expression of BDNF determined by Western blot analysis and RT-qPCR in the cortical tissue of TBI rats. J RT-qPCR detection and Western blot analysis of the expression of NeuN in the cortical tissue of TBI rats. K Nissl staining of the hippocampal neuronal damage in the cortical tissue of TBI rats. L TUNEL-positive cells in the cortical tissue of TBI rats. M Western blot analysis of protein expression of autophagy-related genes (LC3-I, LC3-II, Beclin-1, and P62) in the cortical tissue of TBI rats. TBI hippocampal neurons were treated with oe NC, sh NC, oe FGF2 or sh FGF2. N Neuronal damage assessed by cell immunofluorescence assay. O Axonal length measurement of hippocampal neurons (after primary hippocampal neurons were cultured for 48 h, Tau-1 was used to specifically identify axons, shown with red fluorescent markers in the figure; MAP2 was used to specifically identify dendrites, shown with green fluorescent markers; scale bar: 25 μm). P The apoptosis of hippocampal neurons measured by flow cytometry. In panels E-M, n = 12 for rats upon each treatment. *p < 0.05 vs. sham-operated rats, Sevo-treated TBI rats or hippocampal neurons treated with oe NC; # p < 0.05 vs. TBI rats, Sevo-treated TBI rats or hippocampal neurons treated with sh NC. Cell experiments were conducted three times independently
Sevo depressed neurological injury induced by TBI in rats by downregulating HES1 via activation of FGF2/EZH2 axis
The abovementioned results have shown that Sevo could promote EZH2 expression via FGF2, thereby exerting neuroprotective functions during TBI by downregulating HES1. To further investigate the effects of Sevo on neurological injury in TBI rats by mediating the FGF2/EZH2/HES1 axis, Sevo-treated TBI rats were randomly assigned into four groups via different lentivirus injections. As displayed in Fig. 5A, mNSS and brain water content were decreased while the motor function score was increased by treatment with sh NC + oe EZH2 in Sevo-treated TBI rats, which was abrogated by treatment with sh FGF2 + oe NC. Relative to treatment with sh NC + oe EZH2, treatment with sh FGF2 + oe EZH2 led to higher mNSS and brain water content and a lower motor function score. The results of Western blot analysis and immunofluorescence staining documented reduction of HES1 expression after treatment with sh NC + oe EZH2 in the cortical tissue of Sevo-treated TBI rats, which was counteracted by sh FGF2 + oe NC. Additionally, sh FGF2 + oe EZH2 induced higher HES1 expression than sh NC + oe EZH2 (Fig. 5B, C). As manifested by RT-qPCR and Western blot analysis, treatment with sh NC + oe EZH2 led to an increase of BDNF and NeuN expression in the cortical tissue of Sevo-treated TBI rats, whereas treatment with sh FGF2 + oe NC reversed these trends. However, lower BDNF and NeuN expression was noted in the presence of sh FGF2 + oe EZH2 than with sh NC + oe EZH2 (Fig. 5D, E). The Nissl and TUNEL staining results (Fig. 5F, G) displayed that the neuronal damage and apoptosis were attenuated by sh NC + oe EZH2 treatment, which was neutralized by sh FGF2 + oe NC. In addition, combined treatment with sh FGF2 and oe EZH2 augmented the neuronal damage and apoptosis compared with oe EZH2 alone. Moreover, the results of Western blot analysis exhibited a decline in the expression of LC3-I, LC3-II, Beclin-1, and P62 in the cortical tissue of Sevo-treated TBI rats following treatment with sh NC + oe EZH2, which was negated by FGF2 silencing. Combined treatment with sh FGF2 and oe EZH2 elevated the expression of LC3-I, LC3-II, Beclin-1, and P62 compared with oe EZH2 alone (Fig. 5H).
As described by the results of cell immunofluorescence staining (Fig. 5I), the hippocampal neuronal damage was diminished in Sevo-treated TBI rats treated with sh NC + oe EZH2, which was normalized after additional silencing of FGF2. Flow cytometric data (Fig. 5J) indicated that cell apoptosis was lowered by treatment with sh NC + oe EZH2, while the effect of EZH2 overexpression could be reversed by FGF2 silencing. In conclusion, Sevo activated the FGF2/EZH2 axis to decrease HES1 expression, thus inhibiting neurological injury in TBI rats. FGF2 acted not by increasing the protein level of EZH2, but by another mechanism.
Discussion
TBI can contribute to memory loss or forgetfulness, reduced levels of awareness or consciousness, other neurological or neuropsychological abnormalities, and even death, with the incidence rate of TBI increasing annually [21]. TBI is also an epigenetic risk factor for neurological diseases, such as Alzheimer's disease, Parkinson's disease, and depression, which trigger higher demands for institutional and long-term care [22]. Additionally, the neuroprotective role of Sevo has been identified in ischemic brain injury [23]. In this context, our research aimed to explore whether Sevo conferred neuroprotection against TBI and the related potential mechanisms. Consequently, our data elucidated that Sevo might elevate FGF2 expression to enhance the methylation modification ability of EZH2 binding to the HES1 promoter, thus diminishing HES1 transcription and then alleviating neuronal apoptosis and brain edema in TBI rats.
The initial finding in our research was that Sevo reduced brain edema, neuronal apoptosis, and autophagy and improved the neurological deficits in TBI rats. Consistently, a study by He et al. showed that Sevo postconditioning was capable of repressing neuronal apoptosis and brain edema and improved nerve function in TBI rats, while the neuroprotective effects of Sevo postconditioning were reversed by 3-MA treatment [8]. Sevoflurane postconditioning has been shown to protect the heart from ischemia-reperfusion (I/R) injury by restoring intact autophagic flux [24]. Meanwhile, a recent study has demonstrated that Sevo postconditioning can attenuate brain damage by inhibiting neuronal autophagy and apoptosis in cerebral I/R rats [25]. In addition, mounting research has elaborated the neuroprotection of Sevo in numerous brain injuries. For instance, a prior work uncovered that Sevo exerted neuroprotective effects on electromagnetic pulse-induced brain injury by decreasing neuronal apoptosis and attenuating neurological deficits in rats [26]. Also, Sevo was able to improve the neurological scores, motor coordination, and neuronal injury to induce neuroprotection against cerebral ischemic brain injury [27]. In line with our results, another research elucidated that Sevo contributed to a reduction in neuronal apoptosis and an improvement in long-term cognitive function in neonatal rats after hypoxic-ischemic brain injury [28]. Therefore, these findings confirmed the neuroprotection of Sevo in TBI.

Fig. 3 Sevo activates EZH2 expression via FGF2 to repress neurological deficits in TBI rats. A Western blot analysis of EZH2 expression in the cortical tissue of sham-operated, TBI, or Sevo-treated TBI rats. Sevo-treated TBI rats were treated with sh NC, sh FGF2, oe NC, or oe FGF2. B Western blot analysis of EZH2 expression in the cortical tissue of TBI rats. Sevo-treated TBI rats were treated with sh NC, sh EZH2, or sh EZH2 + oe FGF2. C Immunofluorescence analysis of EZH2 expression in the cortical tissue of TBI rats. D Western blot analysis of EZH2 expression in the cortical tissue of TBI rats. E Neurological function assessment by mNSS, brain water content evaluation, and motor function score in TBI rats. F BDNF and NeuN expression measured by RT-qPCR and Western blot analysis in the cortical tissue of TBI rats. G Nissl staining of the neuronal damage in the cortical tissue of TBI rats. H TUNEL-positive cells in the cortical tissue of TBI rats. I Protein expression of autophagy-related genes (LC3-I, LC3-II, Beclin-1, and P62) in the cortical tissue of TBI rats detected by Western blot analysis. J Flow cytometric analysis of hippocampal neuronal apoptosis in the cortical tissue of TBI rats. *p < 0.05 vs. sham-operated rats or Sevo-treated TBI rats treated with oe NC or oe NC + sh NC; # p < 0.05 vs. TBI rats or Sevo-treated TBI rats treated with sh NC or oe NC + sh EZH2. n = 12 for rats upon each treatment
McNiel et al. [11] analyzed The Cancer Genome Atlas (TCGA) and Oncomine data, which showed that KDM2B was related to EZH2 expression. Their further studies showed that FGF-2 could activate DYRK1A, phosphorylate CREB, and induce expression of the histone H3K36me2/me1 demethylase KDM2B. Kottakis et al. [29] reported that KDM2B can cooperate with EZH2 to regulate cell proliferation, migration, angiogenesis, and transformation [30]. Through a series of animal and cell experiments, we further observed that Sevo triggered FGF2 upregulation, and that FGF2 overexpression diminished brain edema, neurological deficits, and neuronal apoptosis and autophagy in TBI rats. Coincidently, a previous work manifested the alleviation of blood-brain barrier damage and brain edema in rats with TBI after overexpressing FGF2 [31]. Similarly, another work also clarified that FGF2 overexpression resulted in attenuation of brain edema and neurological deficits and enhancement of the number of surviving neurons in the injured cortex and the ipsilateral hippocampus of TBI rats, as well as lowered neuronal apoptosis and autophagy [17]. In addition, FGF2 overexpression reduced excessive neuronal autophagy and apoptosis to offer neuroprotection against transient global cerebral ischemia in rats [32]. More importantly, it has been detected in a prior study that FGF2 is capable of upregulating EZH2 in bladder cancer cells [29], which was partially consistent with our result. Specifically, our results showed that EZH2 could bind to the promoter of HES1 and decreased HES1 expression through its methylase function. Concordantly, a prior work indicated that EZH2 overexpression could contribute to inhibition of HES1 expression in erythroid cells [13]. Further analysis in our research illustrated that EZH2 overexpression depressed brain edema, neurological deficits, and neuronal apoptosis and autophagy in TBI rats by downregulating HES1. It was noted in a previous research that Sevo could obviously augment EZH2 expression to reduce over-activated autophagy, thus having neuroprotective effects in neonatal rats with hypoxic-ischemic cerebral injury [12]. Besides, HES1 upregulation has been detected in mice with TBI in the research of Wang et al. [33]. Consistently, a prior research suggested that HES1 overexpression was involved in promotion of neuronal apoptosis in rats with spinal cord injury [34].
Conclusions
Taken together, the findings from the present study suggest that Sevo attenuated neuronal apoptosis and autophagy to confer neuroprotection against TBI in rats by downregulating HES1 via activation of the FGF2/EZH2 axis (Fig. 6). These findings may provide a better understanding regarding the mechanism of Sevo and FGF2 in TBI. Moreover, prospective studies that could translate these findings regarding the role of Sevo-upregulated FGF2 in TBI into clinical applications will be greatly beneficial.
Additional file 1: Table 1. Rat grouping. Table 2. Information of antibodies. Table 3. Primer sequences for RT-qPCR. Table 4. Primer sequences for ChIP assay. Table 5. Primer sequences for MSP assay.

Fig. 6 Schematic diagram of the mechanism by which Sevo affects TBI via the FGF2/EZH2/HES1 axis. Sevo upregulates FGF2 to elevate EZH2 expression, promote the methylation of the promoter of HES1, and inhibit the transcription of HES1, thereby inhibiting neuronal apoptosis and autophagy, and ultimately promoting the repair of neurological function in TBI rats.

Additional file 2: Fig. S1. Western blot analyses in the cortical tissue of sham-operated, TBI, or Sevo-treated TBI rats. Sevo-treated TBI rats were treated with sh NC, sh FGF2, oe NC, or oe FGF2. D Western blot analysis of FGF2 expression in the cortical tissue of TBI rats. E The expression of BDNF determined by Western blot analysis in the cortical tissue of TBI rats. F Western blot analysis of the expression of NeuN in the cortical tissue of TBI rats. G Western blot analysis of EZH2 expression in the cortical tissue of sham-operated, TBI, or Sevo-treated TBI rats. Sevo-treated TBI rats were treated with sh NC, sh FGF2, oe NC, or oe FGF2. H Western blot analysis of EZH2 expression in the cortical tissue of TBI rats.
Nearly frozen Coulomb Liquids
We show that very long range repulsive interactions of a generalized Coulomb-like form $V(R)\sim R^{-\alpha}$, with $\alpha<d$ ($d$ being the dimensionality), typically introduce very strong frustration, resulting in extreme fragility of the charge-ordered state. An "almost frozen" liquid then survives in a broad dynamical range above the (very low) melting temperature $T_{c}$, which is proportional to $\alpha$. This "pseudogap" phase is characterized by unusual insulating-like, but very weakly temperature dependent transport, similar to experimental findings in certain low carrier density systems.
I. INTRODUCTION
In designing novel materials, lightly doping a parent insulator is typically the method of choice. An especially intriguing situation is found in ultra-clean samples at finite doping, where neither the Anderson 1 (disorder-driven) nor the Mott 2 (magnetism-driven) route for localization can straightforwardly succeed in trapping the electrons. The tendency for charge ordering (CO) then emerges as the dominant mechanism that limits the electronic mobility. As first noted in early works by Wigner 3 and Mott 2, this is precisely where the incipient breakdown of screening reveals the long-range nature of the Coulomb interactions. The corresponding CO states proved to be of extraordinary fragility, restricting the insulating behavior to extremely low densities and/or temperatures. 4 A broad range of parameters then emerges where puzzling "bad insulator" transport characterizes such nearly-frozen Coulomb liquids.
Unusual "bad-insulator" transport behavior has been observed in many systems.
Examples range from high mobility two-dimensional electron systems in semiconductors, 5 to lightly-doped cuprates, 6,7 manganites, 8 and even to the behavior of lodestone (magnetite) above the Verwey transition. 9 In all these cases, a broad range of temperatures has been observed, where the resistivity rises at low temperatures, but it does so with surprisingly weak temperature dependence. In contrast to conventional insulators, where the familiar activated transport reflects a gap for charge excitations, the "bad insulator" behavior has been interpreted 9 as a precursor to charge ordering, leading to very gradual opening of a soft pseudogap in the excitation spectrum.
The physical picture of a nearly-frozen Coulomb liquid has been proposed, on a heuristic level, by several authors, [9][10][11] providing a plausible and appealing interpretation of many experiments. The interplay of spins and charge degrees of freedom in pseudogap formation is still a controversial and unresolved problem. Therefore, to focus on the corresponding role of charge fluctuations, we deliberately ignore any spin effects, and consider a class of models of spinless electrons interacting through long-range interactions.

Figure 1. The pseudogap temperature $T^*$ (dashed line) remains finite as $\alpha \to 0$; a broad pseudogap phase emerges at $\alpha \le d$. We also show $T_c^{SR} \approx 1$ for the same model with short-range interactions (dotted line), and $T_c^{RPA}$ (dot-dashed line) from the classical limit of RPA. The inset shows the corresponding plasmon mode spectral density, which assumes a scaling form for $\alpha \ll 1$. The fluctuations of these very soft "sheer plasmons" lead to the dramatic decrease of $T_c$.
We present the simplest consistent theory of this strongly coupled liquid state. We demonstrate that the existence of such an intermediate liquid regime, which emerges at $k_B T_c < k_B T \ll E_c$ (see below), is a very general phenomenon reflecting the strong frustration produced by long-range interactions. It holds for any interaction of the form $V(R) \sim R^{-\alpha}$, both in continuum and lattice models at any dimension $d \ge 2$, with $\alpha \ll d$. Ours is a microscopic theory that substantiates this physical picture, 9,11 based on quantitative and controlled model calculations. We present a physically transparent analytical description using extended dynamical mean-field theory (EDMFT) to accurately describe the collective charge fluctuations, and benchmark our results using Monte Carlo (MC) simulations.
II. OUR MODEL AND THE EDMFT APPROACH
It has long been appreciated 4,12,13 that in Coulomb systems, the CO temperature scale $T_c$ is generally very small compared to the Coulomb energy $E_c = e^2/a$ ($a$ being the typical inter-particle spacing), which we use as our energy unit. For example, for classical particles on a half-filled hypercubic lattice $T_c \approx 0.1$, 12 while for the classical Wigner crystal in the continuum $T_c \approx 0.01$; 4 similar results are obtained both in $d = 2$ and in $d = 3$. Such large values of the "Ramirez index" 14 $f = E_c/T_c$ suggest that geometric frustration plays a significant role, reflecting the long-range nature of the Coulomb force.
To clarify this behavior, we control the amount of frustration by introducing generalized Coulomb interactions of the form $V(R)/E_c = (R/a)^{-\alpha}$. We consider a lattice model of spinless electrons given by the Hamiltonian
$$H = -\sum_{ij} t_{ij}\, c_i^{\dagger} c_j + \frac{E_c}{2} \sum_{i \neq j} \frac{n_i n_j}{R_{ij}^{\alpha}}.$$
Here $c_i^{\dagger}$ and $c_i$ are the electron creation and annihilation operators, $t_{ij}$ are the hopping matrix elements, $n_i = c_i^{\dagger} c_i$, and $R_{ij}$ is the distance between lattice sites $i$ and $j$ expressed in units of the lattice spacing. The origin of frustration is then easily understood by noting that in the classical limit our lattice-gas model ($n_i = 0, 1$) maps onto an Ising antiferromagnet ($S_i = \pm 1$) with long-range interactions. Here, the maximum level of frustration is achieved for infinite-range interactions ($\alpha \to 0$), and any finite-temperature ordering is completely suppressed.
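The classical limit is easy to explore numerically; a toy Metropolis sketch of the half-filled lattice gas with V(R) = R^−α, using open boundaries and no Ewald summation (unlike the production simulations described in the appendix), with lattice size, temperature, and sweep count chosen purely for illustration:

```python
import numpy as np

# Classical limit (t = 0): half-filled lattice gas n_i in {0, 1} with
# repulsion V(R) = R^-alpha (energies in units of E_c).
rng = np.random.default_rng(1)
L, alpha, T = 8, 0.3, 0.1
x, y, z = np.meshgrid(*(np.arange(L),) * 3, indexing="ij")
pos = np.stack([x, y, z], axis=-1).reshape(-1, 3).astype(float)
N = len(pos)
dist = np.linalg.norm(pos[:, None] - pos[None, :], axis=-1)
V = np.zeros_like(dist)
mask = dist > 0
V[mask] = dist[mask] ** -alpha  # pair-potential matrix, V_ii = 0

n = rng.permutation(np.repeat([0, 1], N // 2)).astype(float)  # half filling

def site_field(i):
    # interaction energy of site i with all currently occupied sites
    return V[i] @ n

for sweep in range(200):
    for _ in range(N):
        # particle-conserving move: swap an occupied and an empty site
        i = rng.choice(np.where(n == 1)[0])
        j = rng.choice(np.where(n == 0)[0])
        dE = (site_field(j) - V[i, j]) - site_field(i)  # exclude self term
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            n[i], n[j] = 0.0, 1.0

# Staggered occupation as a simple checkerboard-order diagnostic
stagger = (-1.0) ** pos.sum(axis=1)
print("CO order parameter:", abs(np.dot(stagger, n - 0.5)) / N)
```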
A controlled theoretical approach to our problem is available for very long-range interactions ($\alpha \ll 1$), which effectively corresponds to a very large coordination number. In this limit the spatial correlations assume a simplified form where the momentum dependence of the (fermionic) self-energy $\Sigma(i\omega_n)$ and the irreducible polarization operator $\Pi(i\Omega_n)$ can be ignored. 26 A conserving approximation that formally sums all the corresponding Feynman diagrams is given by the so-called EDMFT formulation, [15][16][17] where the relevant (local) quantities are computed from an auxiliary local effective action
$$S_{\mathrm{eff}} = -\int_0^{\beta}\! d\tau \int_0^{\beta}\! d\tau'\; c^{\dagger}(\tau)\, G_0^{-1}(\tau - \tau')\, c(\tau') + \frac{1}{2} \int_0^{\beta}\! d\tau \int_0^{\beta}\! d\tau'\; \delta n(\tau)\, \Pi_0^{-1}(\tau - \tau')\, \delta n(\tau'),$$
where $G_0^{-1}(i\omega) = i\omega - \Delta(i\omega)$ and $\delta n(\tau) = n(\tau) - \langle n \rangle$. The dynamical effective-medium (EM) functions $\Delta$ and $\Pi_0^{-1}$ represent the respective fermionic and bosonic baths coupled to the given lattice site. For a given bath, the (local) Dyson equations stipulate that
$$\Sigma(i\omega) = G_0^{-1}(i\omega) - G_{\mathrm{loc}}^{-1}(i\omega), \qquad M(i\Omega) = \Pi_0^{-1}(i\Omega) + \Pi_{\mathrm{loc}}^{-1}(i\Omega),$$
where $G_{\mathrm{loc}}$ and $\Pi_{\mathrm{loc}}$ are calculated directly from $S_{\mathrm{eff}}$. The self-consistency loop is then closed by relating the local and the EM correlators, viz.,
$$G_{\mathrm{loc}}(i\omega) = \sum_{\mathbf{k}} \left[ i\omega - \varepsilon_{\mathbf{k}} - \Sigma(i\omega) \right]^{-1}, \qquad \Pi_{\mathrm{loc}}(i\Omega) = \sum_{\mathbf{k}} \left[ M(i\Omega) + \beta V_{\mathbf{k}} \right]^{-1}.$$
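The structure of the self-consistency cycle can be summarized in a schematic loop; a sketch in which the impurity solver and the lattice relations are supplied as callables (here acting on arrays over Matsubara frequencies), since their concrete form depends on the regime being studied:

```python
# Schematic EDMFT self-consistency loop; the impurity solver and the
# lattice (momentum-sum) relations are deliberately left abstract.
def edmft_loop(solve_impurity, lattice_G, lattice_Pi, Delta0, Pi0_inv0,
               tol=1e-6, max_iter=200, mixing=0.5):
    Delta, Pi0_inv = Delta0, Pi0_inv0      # initial fermionic/bosonic baths
    for it in range(max_iter):
        # 1) solve the auxiliary local action S_eff for the given baths
        G_loc, Pi_loc = solve_impurity(Delta, Pi0_inv)
        # 2) local Dyson equations and momentum sums (inside the callables)
        #    relate the local correlators back to new effective media
        Delta_new = lattice_G(G_loc, Delta)
        Pi0_inv_new = lattice_Pi(Pi_loc, Pi0_inv)
        err = max(abs(Delta_new - Delta).max(),
                  abs(Pi0_inv_new - Pi0_inv).max())
        # 3) linear mixing stabilizes the iteration
        Delta = mixing * Delta_new + (1 - mixing) * Delta
        Pi0_inv = mixing * Pi0_inv_new + (1 - mixing) * Pi0_inv
        if err < tol:
            return Delta, Pi0_inv
    raise RuntimeError("EDMFT loop did not converge")
```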
III. CLASSICAL LIMIT
The most stringent test for the accuracy of EDMFT is provided by examining the classical limit ($t = 0$), where pseudogap formation is most pronounced. Here, the EDMFT equations can be solved in closed form, 17 since the "memory kernel" $\Pi_0^{-1}(\tau - \tau')$ becomes a time-independent constant, $\Pi_0^{-1} = D/\beta^2$, and the corresponding mode-coupling term in Eq. (3) can be decoupled by a static Hubbard-Stratonovich transformation. The density correlator then assumes the form $\Pi_{\mathbf{k}} = (4 + D + \beta V_{\mathbf{k}})^{-1}$, and the self-consistency condition reduces to an integral over the (classical) plasmon-mode spectral density $\nu(\varepsilon)$, whose minimum lies at the corresponding ordering wave vector $\mathbf{k} = \mathbf{Q}$. The mechanism for the $T_c$ depression is then easily understood by noting that for $\alpha \ll 1$ the spectral density assumes the scaling form $\nu(\varepsilon) = \alpha^{-1} \tilde{\nu}((\varepsilon - \varepsilon_0)/\alpha)$, where $\varepsilon_0 \approx -1$; the explicit form of the scaling function $\tilde{\nu}(\varepsilon - \varepsilon_0)$ corresponding to the half-filled cubic lattice is shown in the inset of Figure 1. It features a sharp low-energy spectral peak of the usual dispersive form $\nu(\varepsilon) \sim \varepsilon^{(d-2)/2}$ only at $(\varepsilon - \varepsilon_0) < \varepsilon^*(\alpha)$, i.e., below a characteristic energy scale $\varepsilon^*(\alpha) \sim \alpha$, and a long high-energy tail of the form $\nu(\varepsilon) \sim \varepsilon^{-2}$. Physically, these low-energy excitations correspond to "sheer" plasmon modes with wave vector $\mathbf{k} \approx \mathbf{Q}$; the scale $\varepsilon^*(\alpha) \sim \alpha$ thus plays the role of an effective Debye temperature. Its smallness sets the scale for the ordering temperature $T_c(\alpha) \sim \varepsilon^*(\alpha) \sim \alpha$, in agreement with an estimate based on a Lindemann criterion applied to the sheer mode. 27
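The scaling of the spectral density is easy to visualize if ν(ε) is interpreted as the normalized density of Fourier components V_k of the interaction (an assumption on our part, but consistent with ε₀ ≈ −1: as α → 0, V(R) → 1 for every R ≠ 0, so V_k → −1 for every k ≠ 0). A sketch using a minimum-image approximation on a periodic lattice:

```python
import numpy as np

def vk_spectrum(L=24, alpha=0.3):
    # per-axis minimum-image separations on a periodic L^3 lattice
    r = np.arange(L)
    r = np.minimum(r, L - r).astype(float)
    dx, dy, dz = np.meshgrid(r, r, r, indexing="ij")
    R = np.sqrt(dx**2 + dy**2 + dz**2)
    V = np.zeros_like(R)
    V[R > 0] = R[R > 0] ** -alpha           # V(R) = R^-alpha, V(0) = 0
    return np.fft.fftn(V).real.ravel()[1:]  # drop the k = 0 component

for a in (0.1, 0.3, 1.0):
    Vk = vk_spectrum(alpha=a)
    # as alpha -> 0 the whole spectrum collapses onto -1, with width ~ alpha
    print(f"alpha={a}: min V_k = {Vk.min():+.3f}, spread = {np.ptp(Vk):.3f}")
```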
In the classical limit, the single-particle density of states (DOS) $\rho(\omega, T) \equiv -\mathrm{Im}\, G(\omega + i0^+)/\pi$ assumes a simple bimodal form, with the self-consistently determined parameter $D(T)$ setting the scale of the Coulomb pseudogap ("plasma dip") $E_{\mathrm{gap}} = D/\beta$, which starts to open at the crossover temperature $T^* = D/4\beta$. We stress that, in contrast to the ordering temperature $T_c \sim \alpha$, both $E_{\mathrm{gap}}$ and $T^*$ remain finite for $\alpha \ll 1$, since $D(T) \approx \beta$ in this limit. This leads to the emergence of a broad pseudogap regime for $\alpha \lesssim d$, independent of the precise form or the filling of the lattice. Remarkably, since $D(T)$ remains finite as $\alpha \to 0$, both the density of states $\rho(\omega, T)$ and the conductivity $\sigma(T)$ (see below) display only very weak $\alpha$-dependence, in contrast to $T_c(\alpha) \sim \alpha$. We benchmark these analytical predictions against MC simulations, which used a careful finite-size scaling analysis and (generalized) Ewald summation techniques to account for the long-range interactions (details are given in the appendix). It was found that EDMFT captures all qualitative and even quantitative features of the pseudogap regime for several different values of the exponent $\alpha$, both in $d = 2$ and in $d = 3$. The detailed comparison of EDMFT and MC results will be presented elsewhere; here we illustrate these findings for a $d = 3$ half-filled cubic lattice. Figure 1 shows how EDMFT accurately captures the $\alpha$-dependence of $T_c$, which is found to decrease in a roughly linear fashion as $\alpha \to 0$, while $T^* \approx 0.25$ remains finite, producing a large separation of energy scales and a well-developed pseudogap regime. Note that the familiar Coulomb interaction ($\alpha = 1$) lies well within the small-$\alpha$ regime. This observation makes it clear why our EDMFT theory remains very accurate (as noted in previous work 17) not only for $\alpha \ll 1$, but also for the physically relevant Coulomb case $\alpha = 1$.
IV. GAUSSIAN THEORIES DO NOT CAPTURE PSEUDOGAP FORMATION
The excellent agreement between EDMFT and MC results for the DOS is shown for $\alpha = 0.3$ in Figure 2(a). In contrast, conventional approaches, 18 which typically assume Gaussian statistics for the collective charge fluctuations, fail to capture the pseudogap opening at $T > T_c$. For example, the familiar self-consistent Gaussian approximation (SCGA, the "spherical model"), while predicting the exact same $T_c$ as EDMFT, produces a Gaussian-shaped DOS at any $T > T_c$, in contrast with the MC findings; these shortcomings are especially dramatic for $\alpha \ll d$ (see Figure 2). The popular "random-phase approximation" (RPA), 19 which amounts to a non-self-consistent Gaussian approximation, proves even less reliable in this regime. It grossly overestimates the freezing temperature $T_c$, which is found [dashed line in Figure 2(b)] to remain finite even as $\alpha \to 0$, completely missing the pseudogap regime (shaded area in Figure 1). Physically, the RPA (Stoner-like) freezing criterion reduces to the simplistic Hartree (static mean-field) approximation, which ignores the dramatic fluctuation effects of the soft collective (sheer plasmon) modes.
V. BAD-INSULATOR TRANSPORT IN THE SEMICLASSICAL REGIME
We expect the "bad insulator" transport to be most pronounced in the semiclassical regime $t \ll 1$, where the Coulomb energy represents the largest energy scale in the problem. Here, the pseudogap phase is reached by thermally melting the CO state at $T > T_c(t)$. While our EDMFT equations are difficult to solve in general, in this incoherent regime it is well justified to utilize an adiabatic ("static") approximation, 18 which ignores the time dependence of the collective mode. The EDMFT equations can then be solved in a manner similar to that in the strict classical limit (see above). Physically, the electrons travel in the presence of a static, but spatially fluctuating, random field representing the collective mode. Its probability distribution $P(\phi)$ assumes a strongly non-Gaussian character, reflecting the charge discreteness captured by EDMFT, but ignored by conventional Gaussian theories such as the RPA.
The semiclassical approximation remains valid 18 as long as the time dependence of the density correlator $\Pi(\tau)$ can be ignored. This criterion provides an estimate for the crossover temperature $T_{\mathrm{cros}}$, below which we expect (at large $t$) a gradual crossover towards Fermi-liquid behavior. The resulting phase diagram is shown in Figure 3(a).
These equations are easy to solve for arbitrary parameters of our model, but we illustrate our findings in Figure 3 by showing explicit results for the half-filled cubic lattice with $\alpha = 0.3$. Our semiclassical solution is found to be valid in a broad pseudogap regime $T_c < T < T^*$, which spans almost an order of magnitude in temperature (for $E_F \ll 1$ we find $T_c \approx 0.03$ and $T^* \approx 0.25$).
Here the conductivity displays unusual, insulating-like [dσ(T )/dT > 0], but rather weak (almost linear) temperature dependence [shown in Figure 3(b)], surprisingly similar to that observed in magnetite above the Verwey transition. Our microscopic theory confirms the heuristic picture first proposed in early work of Mott. 9
VI. CONCLUSIONS
We argued that pseudogap behavior in Coulomb systems directly reflects strong frustration found in any system with very long-range repulsive interactions. We demonstrated that a quantitatively accurate strong-coupling description of this regime is possible using the interaction power α as a small parameter in the theory. The corresponding EDMFT equations were solved in the semiclassical regime where the pseudogap phenomena are most pronounced, explaining "bad-insulator" transport found in many puzzling experiments. It should be noted that, using appropriately formulated quantum impurity solvers, 21 the same formulation could be extended to investigate low-temperature quantum critical behavior for the same class of models. This fascinating direction remains a challenge for future work.
A. Ewald Potential
In order to compute the effective potential of the long-range interaction 1/| r ij | α on a hypercubic lattice we use an Ewald-type summation 22 with the help of the integral representation, 23,24 where Γ(α/2) is the Gamma function. We switch the first term of the integral to a momentum sum because the sum does not converge rapidly in real space. Next, we use the representation 25 where f ( r) is an arbitrary function and on the right-hand side the summation is over the vectors of the reciprocal lattice. We then integrate r out and change the variable of integration in the first term, t → 1/t. The final expression of the potential takes a form in which, in each component, k i = 2πn i with n i ∈ Z. At the maximum size of our Monte Carlo simulation, L = 24, the potential is accurate to the eighth decimal place with only |n i | ≤ 3 in each axis, and ε = √π.
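Since the explicit formulas are not reproduced above, the following is only a rough numerical sketch (in Python) of the Ewald-type split described in this appendix: the integral representation of 1/r^α is divided at t = ε², the slowly convergent small-t piece is Poisson-resummed over reciprocal-lattice vectors taken here as k = 2πn/L with |n_i| ≤ 3, and the k = 0 term is dropped, which corresponds to assuming a compensating neutralizing background. All of these numerical choices are illustrative assumptions, not the paper's final expression.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

def ewald_potential(r, alpha, d=3, L=24, eps=np.sqrt(np.pi), nmax=3):
    """Effective 1/|r|^alpha potential between two sites of an L^d periodic
    hypercubic lattice, using 1/r^a = (1/Gamma(a/2)) * int_0^inf t^(a/2-1) e^(-t r^2) dt,
    split at t0 = eps^2 into a screened real-space sum and a reciprocal-space sum."""
    r = np.asarray(r, dtype=float)
    t0, pref = eps ** 2, 1.0 / gamma(alpha / 2)
    offsets = [np.array(idx) - nmax for idx in np.ndindex(*(2 * nmax + 1,) * d)]

    # Real-space part: periodic images, keeping only the short-ranged t in [t0, inf) piece.
    real_part = 0.0
    for n in offsets:
        rr = np.linalg.norm(r + L * n)
        if rr < 1e-12:
            continue  # skip the self-term
        real_part += pref * quad(lambda t: t ** (alpha / 2 - 1) * np.exp(-t * rr ** 2),
                                 t0, np.inf)[0]

    # Reciprocal-space part: Poisson resummation of the t in (0, t0) piece;
    # the k = 0 term is omitted (neutralizing-background assumption).
    recip_part = 0.0
    for n in offsets:
        k = 2.0 * np.pi * n / L
        k2 = float(k @ k)
        if k2 == 0.0:
            continue
        integral = quad(lambda t: (np.pi / t) ** (d / 2) * t ** (alpha / 2 - 1)
                        * np.exp(-k2 / (4.0 * t)), 1e-12, t0)[0]
        recip_part += pref * np.cos(k @ r) * integral / L ** d

    return real_part + recip_part
```

Calling, for example, `ewald_potential(np.array([1.0, 0.0, 0.0]), alpha=0.3)` would then give the periodically summed coupling between nearest-neighbour sites under these assumptions.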
B. Finite-size effects
In the vicinity of Wigner crystallization, the finite-size effects are very strong. The size dependence of the single-particle density of states obtained from Monte Carlo data for d = 3, α = 0.3, and T = 0.0554 is shown in Figure 4.
The nonlinear (two-Gaussian) fitting was done using IGOR 6.01. The fitting parameters, i.e., the distance between the Gaussian peaks and the squared width, are shown as functions of L −α in Figure 4(b). This allows us to perform an accurate extrapolation to L = ∞, and the result is found to be in excellent agreement with the EDMFT prediction. Note how the finite-size result remains very far from the L = ∞ extrapolant even for our largest system size (L = 24). Accurate results, thus, simply cannot be obtained without such a finite-size scaling analysis.
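A compact sketch of the extrapolation procedure just described might look as follows (the equal-weight, symmetric two-Gaussian ansatz, the initial guesses, and the dictionary-style input are assumptions made here for illustration; the published fits used IGOR 6.01):

```python
import numpy as np
from scipy.optimize import curve_fit

def two_gaussians(w, amp, peak_dist, sigma):
    """Symmetric bimodal ansatz: two equal-weight Gaussians centred at +/- peak_dist/2."""
    g = lambda mu: np.exp(-((w - mu) ** 2) / (2.0 * sigma ** 2))
    return amp * (g(-peak_dist / 2.0) + g(peak_dist / 2.0))

def extrapolate_dos(dos_by_L, alpha):
    """Fit rho(omega) for each system size L, then extrapolate the peak distance
    and the squared width linearly in L**(-alpha) to L -> infinity (the intercept)."""
    x, dist, width_sq = [], [], []
    for L, (omega, rho) in sorted(dos_by_L.items()):
        popt, _ = curve_fit(two_gaussians, omega, rho, p0=[rho.max(), 1.0, 0.3])
        x.append(L ** (-alpha))
        dist.append(abs(popt[1]))
        width_sq.append(popt[2] ** 2)
    dist_slope, dist_inf = np.polyfit(x, dist, 1)       # intercept = L -> infinity value
    width_slope, width_inf = np.polyfit(x, width_sq, 1)
    return {"peak_distance_Linf": dist_inf, "width_sq_Linf": width_inf}
```

Here `dos_by_L` would map each simulated size L (e.g. 8, 12, 16, 24) to its binned Monte Carlo (omega, rho) arrays.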
|
2011-09-19T19:27:14.000Z
|
2009-03-19T00:00:00.000
|
{
"year": 2010,
"sha1": "5ac9ba81095c4758a4ba63e67bd669641d46f752",
"oa_license": "publisher-specific, author manuscript",
"oa_url": "https://link.aps.org/accepted/10.1103/PhysRevB.84.125120",
"oa_status": "HYBRID",
"pdf_src": "Arxiv",
"pdf_hash": "4da39b6574fbbc6efe28944cd2cf75c5b6bfc54a",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
}
|
247356939
|
pes2o/s2orc
|
v3-fos-license
|
Transportation and Centering Ability of Kedo-S Pediatric and Mtwo Instruments in Primary Teeth: A Cone-beam Computed Tomography Study
Abstract Background Cleaning and debriding the canals and preserving the shape of the canal without deformation are the primary goals of pulpectomy. Transportation is a critical endodontic iatrogenic fault that could cause a catastrophe. This study evaluated the canal centering ability and canal transportation caused by Kedo-S pediatric and Mtwo instruments, using cone-beam computed tomography (CBCT). Materials and methods This in vitro study was performed on distal roots of 50 primary mandibular first molars. The teeth were scanned using CBCT and randomly divided into two groups. The canals were then prepared using either Kedo-S or Mtwo files (n = 25). The instrumented canals were rescanned. The scanned volumes were sectioned at 2, 4, and 6 mm from the cementoenamel junction (CEJ). Canal transportation (CT) and instrument centering ability were estimated and compared in both groups. Results The mean values for the two study groups were compared. The t-test was used to determine the P value, and Levene's test was used to test the equality of variances between the two groups. The two groups showed similar results in terms of transportation and centering ability (P > 0.05). Conclusion Kedo-S pediatric and Mtwo instruments demonstrated similar canal centering ability and CTs. How to cite this article Haridoss S, Rakkesh KM, Swaminathan K. Transportation and Centering Ability of Kedo-S Pediatric and Mtwo Instruments in Primary Teeth: A Cone-beam Computed Tomography Study. Int J Clin Pediatr Dent 2022;15(S-1):S30-S34.
The CBCT images were assessed with the CEJ taken as a reference point. The canal preparation was measured at three levels.
The cervical level was assessed at 2 mm below the CEJ. The middle level was assessed at 4 mm below CEJ. The apical level was assessed at 6 mm below CEJ.
Canal Transportation
Voxel measurements were used to quantify the noninstrumented and instrumented canals. M1 was the number of voxels at the mesial wall of the noninstrumented canal measured from the outer surface of the mesial portion of the root. M2 was the number of voxels measured after instrumentation from the outer root surface of the mesial part of the root to the canal wall. D1 was the number of voxels from the outer surface of the distal root portion to the distal wall of the noninstrumented canal. D2 was the number of voxels measured after instrumentation from the external surface of the distal portion of the root to the distal surface of the canal (Figs. 1 and 2).
Canal transportation was assessed from the following equation: CT = (M1 − M2) − (D1 − D2). A CT equal to 0 (zero) meant a lack of transportation. Regarding the direction of canal transportation, a negative value indicates transportation towards the distal and a positive value indicates transportation towards the mesial. 12 A no. 15 root canal instrument was used for assessment. All the sample teeth were then divided into two groups containing 25 teeth each (Flowchart 1).
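As a quick illustration of the formula above, the following is a minimal Python sketch; the voxel-derived measurements (given here in millimetres) are hypothetical values chosen only for demonstration:

```python
def canal_transportation(m1, m2, d1, d2):
    """CT = (M1 - M2) - (D1 - D2), following the sign convention in the text:
    0 = no transportation, positive = towards the mesial, negative = towards the distal."""
    ct = (m1 - m2) - (d1 - d2)
    if ct > 0:
        direction = "mesial"
    elif ct < 0:
        direction = "distal"
    else:
        direction = "none"
    return ct, direction

# Hypothetical measurements (mm) at one level, for illustration only
ct, direction = canal_transportation(m1=1.20, m2=1.05, d1=1.10, d2=0.90)
print(round(ct, 2), direction)  # -0.05 distal
```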
Canal Preparation
In group I, E1 Kedo-S files driven by an X-Smart endodontic motor (Dentsply Maillefer, Switzerland) were used to prepare the canals at a speed of 300 rpm and a torque of 2.2 N cm. In group II, Mtwo Basic Sequence NiTi rotary files (VDW, Munich, Germany) driven by an X-Smart endodontic motor (Dentsply Maillefer, Switzerland) at a speed of 300 rpm and a torque of 1.2 N cm were used for canal preparation. The canals were prepared to the full length by the single-length technique without early coronal enlargement. Three Mtwo Basic Sequence instruments (size no. 10 to size no. 20) were used in the primary teeth.
Canals were irrigated with 3 mL of a 5.25% NaOCl solution (27-gauge needle). Glyde (Dentsply, Maillefer) was used for lubrication during instrumentation and after instrumentation; each instrument was changed after five canals.
Specimen Scan
Teeth were scanned before and after canal preparation with CBCT. 13 The root canal geometry is different in primary teeth, and hence it is important to assess the canal preparation achieved with different instruments. 14 The strength of the nickel-titanium (NiTi) rotary systems is that they prepare curved canals uniformly and smoothly and maintain the canal shape, with less instrumentation time and canal tapering than hand instruments. 15 In the present study, teeth with at least 7 mm of root length were selected to simulate clinical conditions. As it offers accurate three-dimensional (3D) observation, CBCT imaging has been used for measuring dentin thickness removal, canal curvature, transportation, and the canal centering ratio. 16 Therefore, the objective of this research was the CBCT evaluation of the transportation and centering ability of the Kedo-S and Mtwo rotary files.
In our study, the Mtwo rotary system showed no transportation at 2, 4, and 6 mm. This finding is similar to the results of previous studies that evaluated the preparation of curved canals using Mtwo files and other NiTi rotary files. It was reported that the Mtwo files conserved canal curvatures better than the K3, RaCe, 3,4 and ProTaper instruments. 17 Owing to the design of the Mtwo files, fewer preparation errors have been reported. 18 However, no significant difference was noted in this respect between the two systems. Both systems in all sections mostly recorded <0.1 mm of canal transportation, which is within the clinically acceptable range given by Peters. 19 In this study, both rotary file systems maintained the canal centering ability better at the middle level; this is in agreement with Selvakumar et al., who observed that the K3 rotary file (2 and 4% taper) maintained the centering ability better than stainless steel. 20 On the contrary, Gambil et al. also found no significant difference between NiTi and K-flex instruments. 12 The risk factors for canal transportation and centering ability are complex radicular anatomy, the lack of direct access, instrument design, and incorrect sequences in the usage of different instruments.
Canal Centering Ability
The following equation was used to determine the canal centering ability: centering ability ratio = (M1 − M2)/(D1 − D2). A result equal to 1.0 demonstrated perfect centering ability; the closer this value was to zero, the lesser the capacity of the instrument to keep itself in the central axis of the canal. 12
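A correspondingly small sketch of this ratio, again with hypothetical measurements; the handling of the D1 = D2 edge case is an assumption, since the text does not discuss it:

```python
def centering_ratio(m1, m2, d1, d2):
    """Centering ability ratio = (M1 - M2)/(D1 - D2); 1.0 indicates perfect
    centering, values approaching 0 indicate poor centering (formula taken
    literally from the text)."""
    if d1 == d2:
        return None  # undefined when no distal dentin was removed (assumption)
    return (m1 - m2) / (d1 - d2)

print(centering_ratio(m1=1.20, m2=1.05, d1=1.10, d2=0.90))  # 0.75
```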
Statistical Analysis
The mean and standard deviation for canal transportation and canal centering ability were estimated, and the t-test was used to calculate the P value. Levene's test was used to assess the equality of variances between the two groups, and the level of significance was set at 0.05.
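A minimal sketch of this analysis with SciPy is shown below; the per-group measurement arrays are hypothetical, and the choice of Welch's correction when Levene's test indicates unequal variances is an assumption rather than something stated in the text:

```python
from scipy import stats

def compare_groups(kedo_s, mtwo, alpha=0.05):
    """Independent-samples t-test plus Levene's test for equality of variances."""
    levene_stat, levene_p = stats.levene(kedo_s, mtwo)
    # Assumption: fall back to Welch's t-test if the variances look unequal.
    t_stat, t_p = stats.ttest_ind(kedo_s, mtwo, equal_var=(levene_p >= alpha))
    return {"levene_p": levene_p, "t_p": t_p, "significant": t_p < alpha}

# Hypothetical canal-transportation values (mm) at one level
kedo_s = [0.05, 0.08, 0.04, 0.07, 0.06]
mtwo   = [0.06, 0.05, 0.07, 0.04, 0.05]
print(compare_groups(kedo_s, mtwo))
```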
Results
In this study, canal transportation and centering ability were examined at 2, 4, and 6 mm from the CEJ. The mean CT values for the mentioned levels are listed in Table 1 and Figure 3. Based on statistical evidence, there was no considerable difference between the two systems in terms of canal transportation.
The frequency of the direction of transportation is shown in Table 2. In general, transportation to the distal area in both systems was higher than transportation to the mesial area, although it was not a statistically significant difference.
The Kedo-S and Mtwo rotary files maintained the canal centering ability better at the middle level when compared to the cervical and apical levels. The difference between the two files in maintaining the canal centering ability was not statistically significant (P > 0.05) (Table 3).
Discussion
The basic principles of biomechanical preparation in endodontic treatment are the complete removal of vital tissue, necrotic remnants, debris, and infected dentin, thereby enabling tissue repair. 21 Among these risk factors, the instrument design and internal canal morphology are intrinsic factors that are independent of the operator's expertise and skill. Of these two factors, the instrument design can be modified. In our study, the Kedo-S file showed more distal displacement, even though the difference was not statistically significant; modifying the shape of the instrument might reduce the transportation of the file. A limitation of this investigation is that it would have been better to compare the outcomes with a conventional stainless steel file; however, there is no conceded gold standard with respect to transportation. 22 Since the lower transportation of NiTi files compared to that of stainless steel hand instruments is already established, 23 we focused on comparing the recently introduced pediatric rotary files with existing NiTi instruments. Another limitation was that, regardless of our endeavor to standardize the samples using the exclusion/inclusion criteria, extracted teeth cannot be completely standardized in terms of canal shapes and sizes. 24,25 The strengths of our investigation are that canal preparation was performed in natural teeth, so its outcomes should be generalizable to clinical practice, that the procedures were performed by one operator (high reproducibility of results), and that software calculations (high precision) and CBCT were utilized.
Conclusion
Within the restrictions of this study, no difference was noted in the canal transportation and centering ability of the rotary files used. Thus, both systems can be used with minimal risk of procedural errors in root canal preparation. In terms of canal transportation and centering ability, the Kedo-S file can be considered safe in primary teeth. However, further investigations with a larger sample size are needed to evaluate the performance of Kedo-S pediatric files in the uneven canal walls of primary teeth.
|
2022-03-09T18:51:26.392Z
|
2022-02-28T00:00:00.000
|
{
"year": 2022,
"sha1": "b2c96d0c485dfbe0cc661be91ce0a1f9043c5081",
"oa_license": "CCBYNC",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "4f297c4e4022b45c526f95abf34b5ff5a86db92b",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
270757528
|
pes2o/s2orc
|
v3-fos-license
|
Research on the Brand Construction of Geographical Indication Agricultural Products in Hainan under the Background of Rural Revitalization
At present, the rural revitalization strategy is being promoted, and Hainan geographical indication agricultural products have gradually attracted attention because of their unique flavor and high quality. However, these products face problems such as low brand awareness, insufficient quality standardization and limited marketing channels. In order to understand these challenges and seize the opportunities, this paper aims to analyze the current situation of Hainan geographical indication agricultural products and explore brand building strategies to promote expansion into domestic and foreign markets and support rural revitalization and sustainable development.
Introduction
With the deepening of the rural revitalization strategy, Hainan geographical indication agricultural products have gradually attracted the attention of the market because of their unique regional characteristics and high quality. However, there are many challenges in brand building, including a lack of brand influence, product quality standardization and supervision that need to be strengthened, and limited marketing channels. The purpose of this study is to analyze the current situation of these agricultural products and explore effective brand building strategies in light of the difficulties and opportunities. Enhancing brand awareness, strengthening quality supervision and innovating the marketing model will help Hainan geographical indication agricultural products expand in domestic and foreign markets and provide solid support for rural revitalization.
Analysis of the current situation of agricultural products
Hainan province has unique natural resources and an ecological environment that nurtures a variety of agricultural products with geographical indications. These products are divided, according to the regional characteristics of their growth, into tropical fruits, seafood, tea and other categories. Each type of product has its own unique flavor and quality; Wenchang chicken, coconut and Hainan yellow lantern pepper, for example, are representative Hainan geographical indication agricultural products.
However, in the face of the current competitive market environment, Hainan geographical indication agricultural products have encountered many difficulties in the process of promotion. First of all, compared with other well-known brands, the brand influence of Hainan geographical indication agricultural products is limited, so consumers lack understanding of these unique products and their recognition is far from sufficient. This not only limits the market expansion of the brand, but also makes it difficult for the products to impress consumers. Secondly, although Hainan geographical indication agricultural products are famous for their unique advantages, their quality control and product standardization still need to be strengthened. These deficiencies to some extent weaken the competitiveness of the products in the market and limit their performance in a broader market. Moreover, the existing marketing channels are relatively limited and lack innovative, diversified promotion strategies, failing to fully tap and utilize the market potential. These challenges urgently require Hainan geographical indication agricultural products to take effective measures to comprehensively improve product quality, enrich marketing means, and enhance brand awareness, so as to better promote market expansion.
Despite the challenges, the brand construction of Hainan geographical indication agricultural products also faces great opportunities. With the implementation of the rural revitalization strategy and the increase in consumers' demand for healthy and safe food, Hainan's geographical indication agricultural products have great market potential by virtue of their unique quality and safety. In addition, the development of "Internet +" and e-commerce has provided new sales channels and brand promotion methods for Hainan geographical indication agricultural products, which helps to enhance brand awareness and market share.
Hainan geographical indication of agricultural products brand construction strategy
In order to effectively enhance the brand awareness and market influence of Hainan geographical indication agricultural products, it is first necessary to adopt a series of targeted strategies to optimize its brand construction.
Enhance brand awareness and market influence
(1) Use of digital media and social platforms: With the advent of the digital era, the use of Weibo, WeChat, TikTok and other social media platforms for brand promotion has become an important means of enhancing brand awareness. Producing high-quality content, such as videos of the product planting and processing process, as well as favorable cases shared by consumers after use, can effectively attract consumers' attention and improve the credibility and attractiveness of the brand. [1] (2) Participation in domestic and foreign exhibitions: By participating in domestic and foreign agricultural products and food exhibitions and directly displaying the advantages and characteristics of Hainan geographical indication agricultural products to wholesalers, retailers and final consumers, the exposure and awareness of the brand can be effectively improved.
Strengthen the standardization and quality supervision of agricultural products with geographical indications
(1) Establish and improve product quality standards: cooperate with the General Administration of Quality Supervision, Inspection and Quarantine to formulate and implement a scientific and strict quality standard system for agricultural products with geographical indications. Setting strict production, processing, storage and transportation standards ensures that each batch of geographical indication products meets high quality requirements.
(2) Strengthen the construction of a quality supervision and traceability system: establish and improve a quality supervision system and product traceability mechanism for geographical indication agricultural products. Each batch should have a clear record and traceability label, so that every step of the product's journey from the field to the table can be tracked and controlled, strengthening consumer trust in Hainan geographical indication agricultural products.
Innovate the marketing mode and expand the sales channels
(1) Combine online and offline sales models: on the one hand, open official flagship stores on e-commerce platforms and display product features through live broadcasts and short videos to attract young consumers; on the other hand, establish offline experience stores or set up special areas in supermarkets and specialty stores, so that consumers can personally experience the unique flavor of the products.
(2) Cross-border cooperation: cooperate with well-known brands, such as tourism, hotel and health-product brands, in joint promotions, and drive the sales of agricultural products with geographical indications through the brand effect of the partners.
(3) Customized services: Provide customized products or gift boxes for specific groups to meet the personalized needs of consumers and improve the added value of products.
Through the implementation of the above strategies, the brand awareness and market influence of Hainan geographical indication agricultural products can be effectively enhanced, laying a solid foundation for Hainan's rural revitalization. At the same time, this can also promote the sustainable development of agricultural products with geographical indications, bring more benefits to producers, and provide consumers with more high-quality choices. [2]
Promote the expansion of domestic and foreign markets of Hainan geographical indication agricultural products
In order to promote the expansion of Hainan geographical indication agricultural products in the domestic and foreign markets, a series of strategic measures need to be taken to show the effectiveness of these strategies combined with actual case analysis.
The application of the strategy in the domestic market
First, in the domestic market, the focus is on improving brand awareness and consumer recognition. This can be achieved by strengthening integrated marketing, both online and offline. For example, social media, e-commerce live broadcasting and other online means can be combined with offline display and sales in large supermarkets and featured agricultural products markets to provide consumers with a full range of purchasing experiences. At the same time, cooperation with tourist attractions can launch experience activities "combining agriculture and tourism", so that tourists can take part in picking, tasting and other activities on site, enhancing their impression of Hainan's geographical indication agricultural products. In addition, holding or participating in various kinds of agricultural product trade fairs is also an effective way to enhance visibility in the domestic market. Through such participation, direct contact can be established with wholesalers, retailers and consumers, market feedback can be collected in a timely manner, and product strategies can be adjusted accordingly.
Analysis of international cooperation and export strategies
In the international market, the expansion of Hainan geographical indication agricultural products needs to emphasize the advantages and characteristics of the products and enhance their international competitiveness through international cooperation and certification. For example, obtaining international certifications (such as EU organic certification, US USDA certification, etc.) is crucial to improving the recognition of the products in the international market. At the same time, establishing cooperative relationships with overseas distributors and using their resources and experience in the local market can promote Hainan geographical indication agricultural products more effectively. Participation in international food exhibitions is also an important channel for entering the international market. This not only provides direct contact with foreign buyers, but also offers an understanding of the latest trends in the international market and consumer preferences, providing the basis for the international marketing strategy of the products. [3]
Successful case of Hainan geographical indication agricultural products
Take Hainan yellow lantern pepper as an example; it is one of the geographical indication products of Hainan. Through the government's vigorous promotion and the active participation of local enterprises, the yellow lantern pepper is not only sold well in the domestic market but has also successfully expanded in the international market. The product has passed international organic certification, increasing its competitiveness in the European and American markets. At the same time, by participating in international food exhibitions and establishing cooperation with a number of foreign distributors, the yellow lantern pepper has come to be accepted and loved by more foreign consumers.
In addition, Hainan coconut is also a successful brand building case. Hainan takes advantage of its unique natural conditions to produce high-quality coconut products and emphasizes their natural and healthy brand image through brand-story marketing, which has successfully attracted the attention of a large number of domestic and foreign consumers. Through cooperation with international airlines, Hainan coconut water and other products are provided on international flights, which has further enhanced the international influence of the brand. [4] It can be seen from these cases that the brand construction and market expansion of Hainan geographical indication agricultural products not only require attention to internal quality management and brand construction, but also require active exploration of domestic and foreign markets, using various channels and methods to let more consumers understand and purchase Hainan geographical indication agricultural products.
Conclusion
In a word, Hainan geographical indication agricultural products have great development potential in the era of globalization and the Internet. By optimizing the brand strategy, improving product quality, innovating marketing channels, and expanding domestic and foreign markets, Hainan's geographical indication agricultural products will be able to align with the rural revitalization strategy and achieve sustainable development. Successful cases show that, with a scientific management system and effective market strategies, Hainan agricultural products with geographical indications can stand out in fierce market competition and provide real help for local economic development and farmers' income growth.
|
2024-06-27T15:26:38.916Z
|
2024-05-08T00:00:00.000
|
{
"year": 2024,
"sha1": "4bc606c247f9f229991cba7d7db0fc2f722f492e",
"oa_license": "CCBYNC",
"oa_url": "https://en.front-sci.com/index.php/memf/article/view/1983/2206",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "a3ea28b39ce639c1e599df270377905cbcd9e412",
"s2fieldsofstudy": [
"Agricultural and Food Sciences",
"Business",
"Economics"
],
"extfieldsofstudy": []
}
|
245181557
|
pes2o/s2orc
|
v3-fos-license
|
3D-PHARMACOPHORE MODELLING OF OMEGA-3 DERIVATIVES WITH PEROXISOME PROLIFERATOR-ACTIVATED RECEPTOR GAMMA AS AN ANTI-OBESITY AGENT
Keywords: 3D-pharmacophore modelling, omega-3 derivatives, PPAR-γ, obesity
Objective: The aim of this work was to study the pharmacophore model of omega-3 derivatives with the PPAR-γ receptor using LigandScout 4.4.3 to investigate the important chemical interactions of the complex structure. Methods: The methods consisted of structure preparation of nine chemical compounds derived from omega-3 fatty acids, database preparation, creation of the 3D pharmacophore model, pharmacophore validation, and screening of the test compounds. Results: The results showed that the omega-3 derivatives docosahexaenoic acid (DHA), heneicosapentaenoic acid (HPA), and docosapentaenoic acid (DPA) have the best pharmacophore fit values of 36.59, 36.56, and 36.56, respectively. According to the results of the pharmacophore study, the carbonyl and hydroxyl of the carboxylate functional group are the active functional groups that exhibit hydrogen-bonding interactions, while the alkyl chain (ethyl and methyl groups) is the portion that can be modified to increase activity. Conclusion: Omega-3 derivatives could be used as a lead drug for the powerful PPAR-γ receptor in the prevention and treatment of obesity.
INTRODUCTION
Obesity is one of the most common diseases, defined by a considerable expansion and alteration of adipose tissue in the body. Obesity has also been linked to the pathogenesis of metabolic syndrome-related cardiovascular disease, which is the leading cause of mortality worldwide [1,2]. Nowadays, the treatment and prevention of obesity are carried out by behavioral therapy, pharmacological treatment, and surgical intervention, although these approaches have many side effects that can reduce the quality of life. In light of this, the use of natural substances is the most viable alternative [3].
Omega-3 is an essential nutrient that has been shown to assist in losing weight by lowering the accumulation of body fat. Omega-3 fatty acids play an important role in controlling lipid metabolism and acting as anti-inflammatory sensors. Although the mechanism for preventing obesity comorbidity is unknown, omega-3 has been found to reduce insulin resistance, which is linked to obesity-related metabolic diseases, by binding to the peroxisome proliferator-activated receptor PPAR-γ [4].
The Peroxisome Proliferator-Activated Receptors (PPARs) family regulates adipocyte differentiation, lipids, insulin sensitivity, and glucose homeostasis. PPAR-γ, which actively acts on adipose tissue and macrophages, triggers the differentiation of fat cells and regulates fatty acid storage and glucose metabolism by influencing related genes. Some anti-obesity medications that target PPAR-γ have full agonist activity, which is associated with a high risk of cardiovascular adverse effects [5,6].
We conducted a molecular docking study of omega-3 derivative compounds with the peroxisome proliferator-activated receptor gamma (PPAR-γ) in a prior project. Based on the lowest binding energy, type of amino acid residue, and inhibition constant, we discovered that docosahexaenoic acid has the best activity [7]. Also, because docosahexaenoic acid has only partial agonist action, it is assumed to have no adverse effects on the cardiovascular system [7]. However, due to the lack of a detailed explanation of the molecular interaction between the drug and the receptor, this discovery remains uncertain. In addition, the functional groups that interact with the receptor were not determined in detail. Therefore, the aim of this work was to perform a ligand-based drug design study of the pharmacophore model of omega-3 derivatives with the peroxisome proliferator-activated receptor gamma (PPAR-γ) using LigandScout 4.4.3 to investigate the important chemical interactions of the complex structure.
Structure preparation
Nine chemical compounds derived from omega-3 fatty acids were chosen based on previous research concerning their bioactivity and pharmacological characteristics. The omega-3 derivatives that were chosen are as follows: hexadecatrienoic acid (HTA), alpha-linolenic acid (ALA), stearidonic acid (SDA), eicosatrienoic acid (ETE), eicosatetraenoic acid (ETA), eicosapentaenoic acid (EPA), heneicosapentaenoic acid (HPA), docosapentaenoic acid (DPA), and docosahexaenoic acid (DHA). The 2D structures were generated with the ChemDraw 2D Ultra 12.0 program, the energy was minimized using MM2 in ChemDraw 3D software, and all of the structures were then saved in .pdb format [8]. The 2D molecular structures of the omega-3 derivatives can be seen in fig. 1.
Database preparation
Several databases are required for pharmacophore modeling, including the active compound database, the decoy database, and the test compound database. The test compounds were obtained through the previous preparation process, whereas the active and decoy compounds were downloaded from http://dude.docking.org/. Then, using LigandScout 4.4.3, each one was opened with the type "training" for the active and decoy compounds and the type "test" for the test compounds. The databases were then saved in .ldb format [9].
Creating pharmacophore
LigandScout 4.4.3 was used to perform pharmacophore modeling. The active compound database file that was previously prepared was opened and then sorted by cluster in the ligand-based menu. Each cluster is made up of one or more compounds, one of which must be converted into a training compound for each cluster, while the others are set to the type "ignored." The pharmacophore models were then created, and the top ten pharmacophore models were validated [9]. The ten pharmacophore models that were obtained were tested one by one to determine which was the best. Each pharmacophore model was entered in the screening column, together with the databases of active and decoy compounds, and the "screening perspective" was used to perform pharmacophore screening. The receiver operating characteristic (ROC) curve was used to assess the validity of the pharmacophores [9].
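For readers who want to reproduce this kind of validation outside LigandScout, the following is a minimal, library-agnostic sketch of how an ROC curve and its AUC could be computed from the screening scores of known actives and decoys; the score arrays are hypothetical and ties are not handled specially.

```python
import numpy as np

def roc_curve_auc(active_scores, decoy_scores):
    """Rank all compounds by decreasing pharmacophore fit score and accumulate
    true-positive and false-positive rates; AUC near 1.0 means the model
    separates actives from decoys, ~0.5 means random ranking."""
    scores = np.concatenate([active_scores, decoy_scores])
    labels = np.concatenate([np.ones(len(active_scores)), np.zeros(len(decoy_scores))])
    labels = labels[np.argsort(-scores)]
    tpr = np.cumsum(labels) / labels.sum()
    fpr = np.cumsum(1 - labels) / (len(labels) - labels.sum())
    return fpr, tpr, np.trapz(tpr, fpr)

# Hypothetical fit scores, for illustration only
actives = np.array([36.6, 36.5, 35.9, 34.5, 33.8])
decoys = np.array([35.0, 33.0, 32.5, 31.0, 30.2, 29.9])
fpr, tpr, auc = roc_curve_auc(actives, decoys)
print(round(auc, 2))
```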
Screening test compounds
The test compound database that was previously generated is loaded in the screening column. In the ligand-based section, the best pharmacophore model was then sent to the screening column for further processing to determine the best compound based on the highest pharmacophore fit score [9].
RESULTS
The ROC curve was used to assess the validity of the pharmacophore. The trade-off between sensitivity (TPR) and specificity (1 − FPR) is depicted by the ROC curve. Classifiers with curves that are closer to the top-left corner perform better; the test becomes less accurate when the curve approaches the 45-degree diagonal of the ROC space [10]. From the study, we found that pharmacophore model 4 has the best ROC curve value. The nine omega-3 derivative compounds were screened for pharmacophore similarity against the best pharmacophore model (model 4), which is a pharmacophore composed of compounds that have been shown to have activity targeting PPAR-γ receptors. The activity was evaluated based on the pharmacophore fit value; a higher fit score indicates a better fit to the model. From this study, docosahexaenoic acid (DHA), docosapentaenoic acid (DPA), and eicosapentaenoic acid (EPA) have the highest pharmacophore fit values. In a molecular interaction study, the arrangement of functional groups that act as the active sites of a structure was studied and assessed against their role in interacting with receptors. The angle and distance of the conformation of the functional groups that make up a molecule have a significant effect on the ability to interact with receptors [11][12][13]. By comparing the interaction patterns of docosahexaenoic acid and propionic acid, we found that both share similar amino acid residues at the carboxylic acid functional group, as can be seen in fig. 3A and 3B; these compounds interact with the TYR473A and SER289A amino acid residues.
DISCUSSION
Beginning with the pharmacophore validation, the ROC curve values were derived from the validation findings of the 10 best pharmacophore models (0.65, 0.61, 0.78, 0.66, 0.54, 0.75, 0.75, 0.66, and 0.62), and these results demonstrated that 3 out of the 10 pharmacophores met the requirement (≥0.7): models 4, 7, and 8. Model 4 was chosen because it had the greatest ROC curve value (AUC: 0.78), indicating that the pharmacophore model was able to properly distinguish actual actives from decoy PPAR-γ molecules in accordance with Kirchmair's methods [14]. The ROC curve values can be seen in fig. 2. In addition, based on the pharmacophore fit value, compounds that fit the pharmacophore model should likewise have PPAR-γ activity.
Because not all of the model's features could be matched, up to two features could be excluded throughout the virtual screening process. In this instance, pharmacophore fit scores would be lower if features could not be matched. Interestingly, all of the derivatives had higher pharmacophore fit scores (34.46 to 36.59) than the parent compound, with docosahexaenoic acid (36.59) and docosapentaenoic acid (36.56) having the best pharmacophore fit scores together with the lowest binding energies (−11.31 and −11.01 kcal/mol, respectively), indicating that their chemical features aligned best with the features of the pharmacophore model [15]. A higher fit score indicates a better geometric alignment of the compound's characteristics with the 3D-pharmacophore model. Each compound's binding energy is presented for comparison (table 1). The suitability of the pharmacophore features on the ligands makes it easy for them to interact, which is correlated with lower binding energy values. The binding energy value describes how spontaneously an interaction will occur: the lower the binding energy, the lower the activation energy, implying that little energy is needed to create the contact system between the ligand and the receptor, resulting in a spontaneous reaction. In addition, the best four compounds based on the pharmacophore fit value show a correlation with the binding energy value, with the highest pharmacophore fit value corresponding to the best binding energy.
Furthermore, for the pharmacophore modelling and molecular interaction analysis, propionic acid (the parent compound) was employed as a lead compound, or comparator, for the omega-3 derivatives in this investigation. Propionic acid was chosen as the lead compound since it has been shown to interact with peroxisome proliferator-activated receptors in prior investigations. Furthermore, the N-methylene-substituted indole-5-propionic acid provides a suitable bio-isosteric replacement for the known tyrosine-based scaffold in PPAR-γ. The carboxylate group and the nitrogen of the oxazole in propionic acid become acceptors of hydrogen-bonding interactions, which involve the amino acids SER289A, TYR473A, and HOH1073A, as shown by the pharmacophore modeling studies (figs. 3-4) [16].
This result is analogous to the hydrogen-bonding interactions formed by the omega-3 derivative compound docosahexaenoic acid, where the hydroxy (OH) and carbonyl (C=O) of the carboxylic group function as donor and acceptor of hydrogen-bonding interactions. Because the amino acids involved, TYR473A and SER289A, are the same as those in the hydrogen-bonding interactions present in the propionic acid-PPAR-γ complex, and the same functional group (the hydroxy and carbonyl of the carboxylic acid) is involved, this indicates that docosahexaenoic acid has the same mechanism of action [17]. Based on these results, the lead compound (propionic acid) has one hydrogen-bonding contact that docosahexaenoic acid does not, namely the nitrogen atom of the oxazole with HOH1073A, but docosahexaenoic acid still engages two types of amino acid residues in its hydrogen-bonding interactions. As well, docosahexaenoic acid can be considered to have the same action because it shares comparable hydrophobic interactions with identical amino acids (LEU330A, MET38A, ILE341A, ILE281A, ILE362A, MET364A, PHE282A, LEU453A, and PHE363A). The fact that these two compounds have comparable dominant interactions implies that they are positioned at the same active site. As seen in the two structures, the carboxylate group is a pharmacophore element that plays a key role in interacting with the PPAR-γ receptor.
CONCLUSION
The carbonyl and hydroxyl of the carboxylate functional group of docosahexaenoic acid are the active functional groups that exhibit hydrogen-bonding interactions, according to the results of the pharmacophore study. The alkyl chain (the ethyl and methyl groups) in docosahexaenoic acid is the part that can be modified to boost activity. The type of interaction in docosahexaenoic acid is identical to that occurring at the carboxylate group of the parent compound, with the same amino acid residues TYR473A and SER289A. This suggests that docosahexaenoic acid could be used as a potential drug for the powerful PPAR-γ receptor in the prevention and treatment of obesity.
|
2021-12-16T16:19:00.695Z
|
2021-12-11T00:00:00.000
|
{
"year": 2021,
"sha1": "93270901de3be5a5c840bc09478817c09140f541",
"oa_license": "CCBY",
"oa_url": "https://innovareacademics.in/journals/index.php/ijap/article/download/43851/25808",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "3543b3051e9db10cccd65995fc1456888496913d",
"s2fieldsofstudy": [
"Medicine",
"Chemistry"
],
"extfieldsofstudy": []
}
|
10893830
|
pes2o/s2orc
|
v3-fos-license
|
Increased Growth-Inhibitory and Cytotoxic Activity of Arsenic Trioxide in Head and Neck Carcinoma Cells with Functional p53 Deficiency and Resistance to EGFR Blockade
Background and Purpose Mutations in the p53 gene are frequently observed in squamous cell carcinoma of the head and neck region (SCCHN) and have been associated with drug resistance. The potential of arsenic trioxide (ATO) for treatment of p53-deficient tumor cells and those with acquired resistance to cisplatin and cetuximab was determined. Material and Methods In a panel of 10 SCCHN cell lines expressing either wildtype p53 or mutated p53, or lacking p53 by deletion, the interference of p53 deficiency with the growth-inhibitory and radiosensitizing potential of ATO was determined. The causal relationship between p53 deficiency and ATO sensitivity was evaluated by reconstitution of wildtype p53 in p53-deficient SCCHN cells. Interference of ATO treatment with cell cycle, DNA repair and apoptosis and its efficacy in cells with acquired resistance to cisplatin and cetuximab was evaluated. Results Functional rather than structural defects in the p53 gene predisposed tumor cells to increased sensitivity to ATO. Reconstitution of wt p53 in p53-deficient SCCHN cells rendered them less sensitive to ATO treatment. Combination of ATO with irradiation inhibited clonogenic growth in an additive manner. The inhibitory effect of ATO in p53-deficient tumor cells was mainly associated with DNA damage, G2/M arrest, upregulation of TRAIL (tumor necrosis factor-related apoptosis-inducing ligand) receptors and apoptosis. Increased activity of ATO was observed in cetuximab-resistant SCCHN cells whereas cisplatin resistance was associated with cross-resistance to ATO. Conclusions Addition of ATO to treatment regimens for p53-deficient SCCHN and tumor recurrence after cetuximab-containing regimens might represent an attractive strategy in SCCHN.
Introduction
Arsenic trioxide (ATO), which has been used for more than 2,000 years in Chinese traditional medicine for the treatment of almost every disease, has made a remarkable comeback into classical medicine after its high efficacy for the treatment of acute promyelocytic leukemia (APL), reported by Chinese doctors, had been confirmed by the results of randomized clinical trials in Europe and the United States [1][2][3]. The impressive complete remission and survival rates observed in APL prompted the subsequent testing of ATO also in other neoplastic diseases. These studies revealed that, besides specifically targeting the promyelocytic leukemia gene product (PML) and the APL-specific fusion protein of PML with the retinoic acid receptor alpha (PML-RARα) and thereby promoting cell differentiation of leukemia cells, ATO can interfere with mitochondrial functions, the cellular redox system, the cell cycle and apoptosis. Since these cellular functions are generally involved in the response of tumor cells to ionizing radiation, the radiosensitizing efficacy of ATO was subsequently evaluated. The first report of a synergistic activity of ATO in combination with radiotherapy came from a murine solid tumor model [4], and these early promising results were subsequently confirmed in xenograft models of glioma [5,6], fibrosarcoma [7], cervical cancer [8] and oral squamous cell carcinoma [9]. Of note, despite its radiosensitizing activity in tumor tissue, the addition of ATO to radiotherapy did not result in a significant increase in normal tissue toxicity [4,9].
As a predictive biomarker for enhanced pro-apoptotic and growth-inhibitory activity of ATO, structural defects in the p53 gene were originally described in models of B-cell lymphoma [10] and multiple myeloma [11,12], which could also explain the low toxicity profile in normal cells expressing wildtype (wt) p53. Since p53 mutations occur very frequently in SCCHN and have been linked to shorter overall survival [13], increased risk of local recurrence [14,15] and radioresistance [16], the combination of radiotherapy with ATO might represent a novel promising therapeutic strategy in SCCHN. To address this question, we evaluated in the present study whether p53 deficiency might be predictive for increased cytotoxic and growth-inhibitory activity of ATO in SCCHN cells. The effects of ATO alone and its combination with irradiation (IR) on clonogenic survival, cell cycle progression and apoptosis were evaluated in a panel of p53-deficient and -proficient SCCHN cell lines. Since ATO treatment has also been shown to activate the EGFR pathway [17], to interfere with surface EGFR expression levels [18] and to modulate EGFR-mediated DNA double-strand break repair [19], we also assessed the growth-inhibitory activity of ATO in a SCCHN cell line model of acquired cetuximab resistance. In addition, potential cross-resistance between ATO and cisplatin was evaluated.
All cell lines with the exception of UD-SCC-2 were HPV-negative. A detailed review on general characteristics and molecular features of these cell lines has been previously published by Lin and coworkers [25]. Two cell line models of acquired resistance to cisplatin and cetuximab were established by treating FaDu and UT-SCC-9 with increasing doses of cisplatin or cetuximab, respectively, for a period of 4 to 8 months.
Cells were cultured in Minimal Essential Medium (MEM) supplemented with 15% heat-inactivated fetal bovine serum and 1X non-essential amino acids. All cell culture reagents were from Gibco (Invitrogen, Darmstadt, Germany). Cell cultures were incubated at 37°C and 5% CO2 in a humidified atmosphere. Arsenic trioxide (ATO) was purchased from Sigma-Aldrich (Munich, Germany). It was dissolved in 1 M sodium hydroxide (NaOH) solution to generate a 25 mM solution, which was further diluted in H2O to generate a 1 mM stock solution. Working solutions were freshly prepared from the stock solution by dilution in cell culture medium on the day of the experiment. Cetuximab was provided by Merck Pharma GmbH (Darmstadt, Germany). Cisplatin was purchased from Sigma-Aldrich.
Molecular analysis of the p53 genotype
The previously reported gene sequence of p53 within the coding region of the SCCHN cell lines was confirmed by sequencing the full-length transcripts after their PCR amplification. Total cellular RNA extraction was performed using the High-Pure RNA Isolation kit (Roche Diagnostics, Mannheim, Germany). Synthesis of cDNA was done with the Omniscript Reverse Transcription kit (QIAGEN, Hilden, Germany) according to the supplied protocol using random hexamers and oligo dT15 primers (Roche, Basel, Switzerland) and 2 µg of total RNA. PCR was carried out in a reaction volume of 25 µl containing 2 µl cDNA, 2.5 µl 10× PCR buffer, 2.0 mM MgCl2, 100 nM of each primer, the four deoxynucleoside triphosphates (200 µM each) and 1 unit of InviTaq DNA polymerase (Invitek GmbH, Berlin, Germany). For amplification of the whole coding region two primer pairs were used: forward primer 1: 5′-CTTCCGGGTCACTGCC-3′; reverse primer 1: 5′-GCTGTGACTGCTTGTAGATG-3′, amplifying a 518-bp fragment of the p53 cDNA; forward primer 2: 5′-GTTGATTCCACACCCCCGCCC-3′; reverse primer 2: 5′-GTGGGAGGCTGTCAGTGGGGA-3′, amplifying a PCR product of 782 bp in length. PCR cycling was carried out on a thermal cycler (Eppendorf, Hamburg, Germany). After initial denaturation at 95°C for 5 min, the reaction was carried out with denaturation at 95°C for 1 min, annealing at 50°C (primer pair 1) or 66°C (primer pair 2) for 30 s, and elongation at 72°C for 90 s, for 45 cycles. The extension was lengthened to 5 min for the last cycle. PCR products were stained with SYBR Green and analyzed by agarose gel electrophoresis. After purification using the Qiaex II Gel Extraction kit (Qiagen), samples were sent to Source Bioscience (Berlin, Germany) for sequencing. For the cell lines lacking or expressing too low levels of p53 mRNA, direct dideoxynucleotide sequencing of all p53 exons was performed.
p53 transcriptional activity assay
As read-out for p53 transcriptional activity in the SCCHN cell lines, basal and irradiation-induced p21 expression levels were determined by quantitative reverse-transcriptase polymerase chain reaction (qRT-PCR). Total cellular RNA extraction was performed using the High-Pure RNA Isolation kit (Roche Diagnostics, Mannheim, Germany). Synthesis of cDNA was done with the Omniscript Reverse Transcription kit (QIAGEN, Hilden, Germany) according to the supplied protocol using random hexamers and oligo dT15 primers (Roche, Basel, Switzerland) and 2 µg of total RNA. The quality of RNA was checked by GAPDH PCR and only samples positive for GAPDH transcripts were used for analysis. Real-time PCR was performed in a reaction volume of 20 µl containing 2 µl cDNA, Light Cycler TaqMan Master (Roche), and primers and probes for p21 and the housekeeping gene porphobilinogen deaminase (PBGD) in the concentrations recommended by the manufacturer (Real Time Ready Assays, Roche). PCR cycling was performed using the Light Cycler 480 II (Roche). Relative quantification of p21 expression was done by normalization to the expression levels of PBGD using the ΔCt method.
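As a small illustration of the ΔCt normalization described above (a sketch only; the assumption of ~100% PCR efficiency, i.e. a factor of 2 per cycle, and the use of an untreated calibrator for fold induction are assumptions, not details taken from the text):

```python
def relative_expression(ct_p21, ct_pbgd):
    """DeltaCt normalisation of the target (p21) to the housekeeping gene (PBGD),
    assuming ~100% amplification efficiency (2-fold per cycle)."""
    return 2.0 ** -(ct_p21 - ct_pbgd)

def fold_induction(ct_p21_ir, ct_pbgd_ir, ct_p21_ctrl, ct_pbgd_ctrl):
    """IR-induced p21 fold change relative to the non-irradiated calibrator sample."""
    return relative_expression(ct_p21_ir, ct_pbgd_ir) / relative_expression(ct_p21_ctrl, ct_pbgd_ctrl)

# Hypothetical Ct values: p21 amplifies ~3 cycles earlier after IR
print(round(fold_induction(24.0, 22.0, 27.0, 22.0), 1))  # 8.0
```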
Clonogenic survival assays
Cells were seeded into 12-well plates at a density of 300 cells/well. Twenty-four hours after seeding, cells were left untreated or were treated with increasing doses of ATO, irradiation (IR) or the combination of both. Irradiation was performed using a 250 kV deep x-ray unit (Philips RT250) and single doses up to 6 Gy were applied. Non-irradiated cultures were processed along with irradiated cultures. If not stated otherwise, cells were incubated with ATO 2 hs before irradiation for combined treatment. Cells were then incubated for up to 14 days. At the end of the experiments, colonies were fixed, stained using 10% Giemsa stain solution, and colonies containing >50 cells were counted. Plating efficiency and survival fractions for given treatments were calculated on the basis of the survival of non-treated cells. All samples were done in triplicate and at least three independent experiments were carried out. The median inhibitory concentration (IC50) of ATO and the combination index (CI) for the combined treatment with ATO and IR were calculated using the CalcuSyn software version 2.1 (Biosoft, Cambridge, UK).
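A minimal sketch of how plating efficiency, surviving fraction, and a crude IC50 estimate could be derived from colony counts is shown below; the published IC50 and CI values were obtained with CalcuSyn, not with this sketch, and both the interpolation approach and the example numbers are assumptions for illustration:

```python
import numpy as np

def plating_efficiency(colonies_control, cells_seeded):
    """Fraction of untreated cells that form colonies."""
    return colonies_control / cells_seeded

def surviving_fraction(colonies_treated, cells_seeded, pe_control):
    """Colony counts of treated wells normalised to the plating efficiency of controls."""
    return colonies_treated / (cells_seeded * pe_control)

def crude_ic50(doses_nM, surviving_fractions):
    """Log-free linear interpolation of the dose at which the surviving fraction
    crosses 0.5, assuming it decreases monotonically with dose."""
    sf = np.asarray(surviving_fractions, float)
    doses = np.asarray(doses_nM, float)
    return float(np.interp(0.5, sf[::-1], doses[::-1]))

pe = plating_efficiency(colonies_control=120, cells_seeded=300)    # 0.4
sf = [surviving_fraction(c, 300, pe) for c in (110, 90, 55, 30)]   # hypothetical counts
print(round(crude_ic50([75, 150, 225, 300], sf)))                  # ~214 (nM)
```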
MTT cell viability assay
The 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) assay was performed in order to determine the sensitivity of cisplatin-resistant FaDu cells (FaDu CDDP-R) and cetuximab-resistant UT-SCC-9 cells (UT-SCC-9 CET-R), as well as their parental sensitive counterparts, to ATO treatment. Briefly, cells were seeded in 96-well plates. After 24 hs, ATO was added and cells were incubated for up to 10 days. Cell viability was assessed by measuring the absorbance of the formazan solution. Each sample was analyzed in six technical replicates and the experimental series was repeated four times. Survival fractions were calculated on the basis of untreated cells.
Reconstitution of wt p53 in SCC9 cells
SCC9 cells were stably transfected with the tetracycline-inducible expression vector pRTS1 [26], either encoding wt p53 (SCC9-wtp53) or as an empty vector (SCC9-vc). Briefly, plasmid DNA (20 µg) was transfected into 10^6 cells by electroporation in 400 µl Optimem (Invitrogen) at 250 V and 975 µF using a Biorad electroporation apparatus. Immediately after electroporation, cells were resuspended in MEM growth medium supplemented with 10% FCS and were allowed to recover for 48 hs at 37°C and 5% CO2. Selection of transfected cells was performed by their subsequent cultivation for 4 weeks in complete growth medium additionally supplemented with hygromycin B (Calbiochem) to a final concentration of 70 µg/ml. For activation of conditional gene expression of wt p53, cells were treated with doxycycline (Dox) at the indicated concentrations.
Immunoblotting
The expression levels of p53 and its downstream target p21 in SCC9-wtp53 and SCC9-vc cells were assessed by standard immunoblotting. Briefly, cells were treated with doxycycline as indicated in the figure legends. Standard SDS-polyacrylamide gel electrophoresis was performed using 60 µg of total protein per cell lysate, followed by transfer to PVDF membranes (EMD Millipore, Billerica, MA, US). The following antibodies were used for detection: mouse anti-human p53 (clone DO-1, Santa Cruz, Santa Cruz, CA, USA), mouse anti-human p21 (clone Ab-1, Calbiochem, EMD Millipore Corporation) and peroxidase-conjugated goat anti-mouse IgG (Jackson ImmunoResearch Laboratories, West Grove, PA, USA). The immunoreactivity was detected using the ECL plus Western Blot detection system (Amersham Biosciences, GE Healthcare Europe, Freiburg, Germany).
Detection of apoptosis by assessing annexin-V-FITC and propidium iodide
Cells were seeded into 12-well plates at a cell density of 3 × 10^4 cells/well. Twenty-four hours later, cells were treated for 96 hs with ATO, IR or the combination of both at defined concentrations and doses. At the end of the experiment, cells were harvested by trypsinization and washed in three subsequent washing steps with culture medium, phosphate-buffered saline (PBS) and annexin-V binding buffer (ABB; 10 mM HEPES/NaOH, pH 7.4; 140 mM NaCl; 2.5 mM CaCl2). Phosphatidylserine on the outer leaflet of the plasma membrane, as a specific marker of apoptotic cells, was detected by staining the cells in ABB containing annexin-V labeled with fluorescein isothiocyanate (FITC) at the concentration recommended by the manufacturer (Alexis Biochemicals, ENZO Life Sciences, Exeter, United Kingdom). For the discrimination of apoptotic and necrotic cells, the cell-membrane-impermeable dye propidium iodide (PI, final concentration: 1 µg/ml) was added to the staining solution. After staining for 15 minutes, cells were immediately analyzed using a FACSCanto II cytometer (BD Biosciences Europe, Heidelberg, Germany). At least 10,000 events were recorded. Data analysis was performed with BD FACSDiva Software v6 (BD Biosciences).
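For orientation, the following is a minimal sketch of the quadrant logic commonly applied to annexin-V-FITC/PI data; the quadrant interpretation, the cutoff values and the event intensities are conventional assumptions, not taken from the text (in practice the gates come from the FACSDiva analysis and unstained controls):

```python
import numpy as np

def quadrant_fractions(annexin_fitc, pi, fitc_cutoff, pi_cutoff):
    """Fraction of events per quadrant: annexin-/PI- viable, annexin+/PI- early
    apoptotic, annexin+/PI+ late apoptotic/secondary necrotic, annexin-/PI+ necrotic."""
    a = np.asarray(annexin_fitc) > fitc_cutoff
    p = np.asarray(pi) > pi_cutoff
    return {
        "viable": float(np.mean(~a & ~p)),
        "early_apoptotic": float(np.mean(a & ~p)),
        "late_apoptotic": float(np.mean(a & p)),
        "necrotic": float(np.mean(~a & p)),
    }

# Hypothetical fluorescence intensities for a handful of events
fitc = [120, 800, 950, 90, 700, 100]
pi = [40, 60, 500, 30, 450, 600]
print(quadrant_fractions(fitc, pi, fitc_cutoff=300, pi_cutoff=200))
```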
Assessment of cell cycle distribution
Cells were harvested by trypsinization, washed and re-suspended in 0.5 ml PBS. Fixation was performed by drop-wise addition of an equal volume of ice-cold ethanol. After washing with PBS cells were re-suspended in 0.5 ml propidium iodide (PI) staining solution (20 mg/ml PI, 0.1% (v/v) Triton-X, 200 mg/ml DNAsefree RNAse in PBS). Samples were stored overnight at 4uC and analyzed on the next day. The relative number of cells in the G0/ G1 and G2/M phases of the cell cycle and the number of apoptotic cells with DNA fragmentation (sub-G1 peak) were determined by flow cytometry. Determination of surface TRAIL receptors and nuclear gamma-H2AX by flow cytometry Cells were incubated with different concentrations of ATO for 4 to 48 hs, harvested by trypsinization and fixed by ethanol. Cells were incubated with antibodies in staining buffer (1% bovine serum albumin and 0.2% (v/v) Triton-X in PBS) for 20 min and were subsequently analyzed using the FACSCanto II cytometer. The following antibodies were used for staining: mouse anti-TRAIL-R1 (clone DJR1) and TRAIL-R2 (clone DJR2-4, both PE-labeled, eBioscience, Hatfield, UK); mouse anti-p-H2AX (clone JBW301, Merck-Millipore, Darmstadt, Germany), and rabbit anti-mouse Alexa Fluor 488 (Life Technologies, Darmstadt, Germany).
Statistical analysis
All statistical analyses were performed using StatView Software (SAS Institute Inc., Version 5.0.1). The significance of differences in the distribution of cells in the cell cycle and the extent of apoptosis were determined using the paired t-test. The differences in clonogenic survival after treatment of p53-deficient andproficient cell lines with ATO were evaluated for significance using the ANOVA unpaired t-test. The level of significance was set at p,0.05.
Results
The p53 functional status of SCCHN cells correlates with their sensitivity to ATO treatment
In order to determine whether the p53 status interferes with the cytotoxic and growth-inhibitory activity of ATO, we first confirmed the previously reported TP53 genotype of the SCCHN cell lines by sequencing (Table 1). In addition, we assessed the p53 transcriptional activity in each cell line using IR-induced expression of the p53 target gene p21 as a functional read-out [27]. The p21 expression levels before and 4 hs after IR of cells with a single dose of 6 Gy were quantified by qRT-PCR. This analysis revealed the absence of any p21 upregulation in cell lines with a deletion or mutation in the TP53 gene, while a 2- to 15-fold induction was observed in cell lines with wt p53 (Figure 1 A, Table 1). Although no genetic lesion in the complete coding sequence of p53 could be detected, we failed to observe any significant IR-induced induction of p21 in UM-SCC-25 cells (Table 1). We therefore allocated this cell line to the p53-deficient group (Figure 1 A).
Subsequently, we compared the effect of ATO treatment on clonogenic survival in the groups of p53-deficient and p53-proficient SCCHN cell lines. As seen in Figure 1 B, treatment of p53-deficient SCCHN cells with submicromolar doses (75-300 nM) of ATO reduced their clonogenic survival in a dose-dependent manner. In SCCHN cell lines expressing functional p53, the same doses of ATO only slightly interfered with their clonogenic survival (Figure 1 C). The mean inhibition of clonogenic survival at 300 nM of ATO was 62% in the p53-deficient group (range: 30% to 90%) and 12% (range: 9% to 15%) in the p53-proficient group, a significant difference (ANOVA unpaired t-test, p = 0.015). The inhibitory effect of ATO on clonogenic survival in the p53-deficient group was independent of whether the genetic lesion in the TP53 gene consisted of a deletion of exons 2-9 leading to a p53-null phenotype (UT-SCC-9), a frameshift mutation leading to truncated p53 protein (UD-SCC-4, SCC9) or a single missense mutation within the DNA-binding domain of p53 (UD-SCC-5, UM-SCC-11B, FaDu), and was also observed in cells lacking any genomic lesion within the TP53 coding sequence but expressing p53 transcripts without transactivation activity (UM-SCC-25).
In order to establish a causal relationship between p53 deficiency and increased sensitivity of SCCHN cells to ATO, we assessed whether reconstitution of wt p53 in p53-deficient SCC9 cells would decrease their sensitivity to ATO treatment. SCC9 cells were stably transfected with the tetracycline-inducible expression vector pRTS1 [26], either encoding wt p53 (SCC9-wtp53) or as an empty vector (SCC9-vc). In the absence of doxycycline (Dox), expression of p53 and p21 was not detectable in SCC9-wtp53 cells. Addition of Dox led to a significant induction of wt p53 expression in a dose-dependent manner (Figure 2 A)
Combined treatment of SCCHN cells with ATO and IR inhibits their clonogenic survival in an additive manner
Based on previous reports of a significant radiosensitizing activity of ATO in a xenograft model of oral squamous cell carcinoma [9], we next asked whether we could observe such an effect in our SCCHN cell lines as well and whether the radiosensitizing activity of ATO would also depend on the p53 status. When cells were treated with ATO 2 hs before IR, their clonogenic survival was inhibited more effectively than by either treatment alone. After correction for the cytotoxic activity of ATO itself, however, no significant radiosensitizing activity could be observed in either p53-deficient or p53-proficient SCCHN cell lines, with the exception of the UM-SCC-25 cell line (Figure 3). The calculated combinatory indices (Table 1) indeed suggested an additive but not synergistic effect of the combination regimen in 9 of 10 cell lines. Since there is evidence from previous studies that the interaction between ATO and IR could depend on the sequence of their combination, we also treated cells with ATO 2 hs after IR. Again, only additive effects of the combined treatment were observed (data not shown).
Cell cycle arrest, residual DNA double strand breaks and apoptosis by ATO in p53-deficient SCCHN cells
Considering the manifold interactions of ATO with diverse cellular functions [28], we were then interested in its mechanisms of action in SCCHN cells lacking functional p53. In order to characterize direct effects of the drug, we chose shorter incubation times than in the clonogenic survival assays and tested a broader concentration range of ATO in this set of experiments. Using FaDu cells as a model for p53-deficient SCCHN cells, we first determined the influence of ATO on cell proliferation. As shown in Figure 4 A, the exponential increase in cell numbers over time was significantly inhibited by ATO at a concentration of 500 nM and completely blocked by 1 µM of ATO. A comparable dose-dependent inhibitory activity could also be observed when the metabolic activity of FaDu cells was determined using the MTT assay (data not shown).
We further assessed whether the observed inhibition of cell proliferation was mediated by a blockade in cell cycle progression or was a result of direct induction of apoptosis. After treatment with ATO for 96 hs we observed a dose-dependent arrest in the G2/M phase of the cell cycle in the p53-deficient FaDu but not in p53-proficient UD-SCC-2 cells (Figure 4 B, C). In line with the results from the clonogenic survival assay, the combination of ATO with IR increased the inhibitory effect on cell cycle progression in an additive manner in the p53-deficient but not -proficient cells (Figure 4 C). Beside the effect on cell cycle direct induction of apoptosis by ATO alone (Figure 5 A) and in combination with IR (Figure 5 B) was observed and again, the p53-deficient FaDu cells were significantly more sensitive to the pro-apoptotic activity of ATO than the p53-proficient UD-SCC-2 cells.
Increased cytotoxic activity of ATO has previously been linked with increased activation of the extrinsic cell death program via TRAIL receptors [11,12,29] and a reduced capability of tumor cells to repair DNA double strand breaks [19]. In order to assess any potential interference of ATO with these cellular programs in SCCHN cells, we treated p53-deficient (FaDu) and -proficient cells (UD-SCC-2) with ATO for 48 hs. We then evaluated any potential changes in their surface expression of TRAIL-R1 and TRAIL-R2 by flow cytometry. No basal expression of TRAIL-R1 and TRAIL-R2 was found in either of the two cell lines. After treatment with ATO, the expression of both death receptors was induced in the p53-deficient but not in the proficient cell line (Figure 5 C). Assessment of nuclear gamma-H2AX as a specific marker for DNA double strand breaks revealed reduced repair capacity of the p53-deficient compared to the p53-proficient cell line (Figure 5 D).
The growth-inhibitory effect of ATO in SCCHN cells with acquired cetuximab and cisplatin resistance
Cisplatin (CDDP) and cetuximab are the two major components of concurrent radiochemotherapy for first-line treatment of primary SCCHN. Since the 5-year recurrence rates after radiochemotherapy are still considerably high and since treatment most probably selects for tumor cells with resistance to the respective agents, we evaluated the potential of ATO for treatment of recurrent disease in two SCCHN models of acquired resistance to CDDP and cetuximab. These models had been established by long-term treatment with increasing concentrations of these drugs. Assessment of viability using the MTT assay after long-term drug treatment revealed that the phenotype of acquired resistance was stable up to a minimum of 6 months after stopping the selection process by removing the drug from the cultures. A significant difference in the sensitivity of resistant subclones (UT-SCC-9 CET-R , FaDu CDDP-R ) compared to the parental cells (UT-SCC-9 CET-S , FaDu CDDP-S ) could be observed (Figures 6 A, B, right panels). In the model of acquired cetuximab resistance we observed a significant and pronounced increase in the sensitivity of cetuximab-resistant SCCHN cells to ATO treatment (Figure 6 A). In contrast, CDDP-resistant FaDu CDDP-R cells were cross-resistant to ATO treatment (Figure 6 B).
Discussion
In this study, we could demonstrate that ATO at doses below the clinically achieved plasma levels of current ATO-containing treatment regimens in APL [30] displayed significant growth-inhibitory and cytotoxic activity preferentially in p53-deficient SCCHN cells and increased the inhibitory effect of ionizing radiation on clonogenic survival in an additive manner. The addition of ATO to current treatment regimens could thus represent a potential treatment strategy to improve the therapeutic outcome of SCCHN patients with p53-deficient tumors.
Although mutations within the TP53 gene are considered the most frequent [31,32] and one of the earliest genetic alterations [33,34] in the carcinogenesis of SCCHN their prognostic value is still a matter of debate. This is mainly due to the small number of patients, the lack of a focus on a particular tumor site and the methodological differences in the assessment of TP53 mutations in the majority of the published studies so far precluding a conclusive meta-analysis [35]. Nonetheless, there is accumulating evidence that patients presenting with tumors harboring disruptive [13], truncating [15] or loss-of-function mutations in the TP53 gene [36] belong to a group of patients with poor prognosis and increased risk of treatment failure [13,15,16,36]. TP53 mutations are highly enriched in patient cohorts of HPV-negative carcinomas [37]. Results from large clinical trials revealed impaired efficacy of all components of the state-of-the-art SCCHN treatment regimes in patients with HPV-negative carcinomas and this has been linked at least in part with p53 deficiency [38]. These patients could potentially benefit from novel combinatory regimens including ATO.
In our cell line model, we observed an additive but not synergistic interaction between ATO and IR. This was not surprising since former in vivo studies had already demonstrated that the radiosensitizing effect of ATO is mainly based on its interaction with the tumor microenvironment and not on a direct modulation of the cellular radiosensitivity of tumor cells [4,7]. In murine xenograft models, an immediate vascular shutdown followed by extensive central necrosis after a single application of ATO has been reported [4]. This antivascular effect of ATO was accompanied by an increase in the intratumoral levels of the known vasoactive mediator TNF-α [4] and was associated with an increased therapeutic efficacy of radiotherapy. Comparable results were reported for fibrosarcoma xenografts, in which the vascular-damaging activity of ATO was mainly observed in regions of low pH and poor oxygenation [7], conditions inherent to tumor sites. These features of ATO, together with its preferential activity in p53-deficient SCCHN cells reported here, strongly suggest a favorable therapeutic window for the combination of ATO and IR in p53-deficient tumors.
The molecular basis for the preferential sensitivity of p53-deficient tumor cells to ATO treatment remains largely elusive. Distinct effects of ATO on G1 and G2 cell cycle checkpoints, leading to differences in the extent and duration of the G2/M arrest and the induction of mitotic arrest-associated apoptosis in p53-deficient and p53-proficient cells, have previously been reported for the model of multiple myeloma [11] and the Li-Fraumeni syndrome [39], and comparable results were obtained from studies of other DNA-damaging agents such as paclitaxel [40]. In line with these reports, we observed a more pronounced G2/M arrest and increased relative numbers of apoptotic cells in p53-deficient compared to p53-proficient SCCHN cell lines. We also confirmed the previously reported upregulation of TRAIL receptors [11,12,29] and accumulation of DNA double strand breaks after ATO treatment [19]. As for the G2/M arrest, these molecular changes were more pronounced in p53-deficient SCCHN cells. A more detailed analysis of changes in the expression or activation status of regulators of cell cycle, apoptosis and DNA repair will be necessary to reveal the molecular determinants of increased ATO sensitivity in p53-deficient SCCHN cells.
We could demonstrate that assessment of p53 functional activity enabled a better prediction of ATO sensitivity than the p53 mutational status. However, the integration of the analysis of IR-induced p21 expression into the clinical algorithm of individual treatment selection seems difficult, given that tumor tissue harvested before and after the first radiation would have to be analyzed. As an alternative, vital tissue sections prepared from surgical specimens of SCCHN patients might be more suitable for ex vivo assessment of p53 functionality and identification of ATO-sensitive tumors before starting treatment. A pilot study for the evaluation of p53 functionality as a predictive pretreatment biomarker, as opposed to sole assessment of HPV status, has been initiated in our laboratory.
The analysis of our cell line models of acquired drug resistance revealed that cetuximab resistance was associated with increased, while CDDP resistance was correlated with decreased, sensitivity to ATO. This interesting observation certainly deserves confirmation in additional models of cetuximab resistance. Transient activation of the EGFR signaling pathway in normal and tumor cells upon exposure to arsenic has been reported in several studies [17,19,41-43]. This cellular response has been shown to antagonize the ATO-induced apoptotic response, thereby contributing to the insensitivity of solid tumors to ATO treatment [17,19,42]. Cetuximab resistance, which is characterized by similar molecular features, such as increased activation of the downstream effector kinases Src, PI3K and AKT in the EGFR signaling pathway, should therefore rather be linked to ATO resistance. Future studies will be needed to elucidate the molecular basis for the unexpected contrary correlation in our study.
Our results of a cross-resistance between CDDP and ATO in SCCHN cells are in line with the results from a previous study in bladder cancer [44]. As one potential mechanism, upregulation of protective anti-oxidative enzymes which has been associated with CDDP resistance [45,46] as well as resistance to arsenicals [47] might be involved in the observed cross-resistance.
In conclusion, we identified ATO as a potentially valuable drug for the treatment of p53-deficient SCCHN, recommending its further evaluation for HPV-negative SCCHN given the high frequency of TP53 loss-of-function mutations in this patient subset. Its previously reported synergistic activity with radiotherapy in vivo strongly supports preclinical and phase I clinical evaluation of this treatment combination in the future.
|
2018-04-03T00:20:44.718Z
|
2014-06-13T00:00:00.000
|
{
"year": 2014,
"sha1": "d6c6b8f1528722ca942aad4e6cd5ac0e9f61de1e",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0098867&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "d6c6b8f1528722ca942aad4e6cd5ac0e9f61de1e",
"s2fieldsofstudy": [
"Medicine",
"Chemistry"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
}
|
244482818
|
pes2o/s2orc
|
v3-fos-license
|
Factors Associated with the Current State of Food Safety Knowledge and Practices among the ‘Doi’ Workers in Bogura, Bangladesh: A Cross-sectional Study
'Doi,' or yogurt, is a traditional dairy product in Bangladesh. Bogura's 'Doi' is the most popular of all 'Doi' items throughout the country. The state of food safety in the 'Doi' business is of great concern because this product is consumed by a vast number of people. The current study aims to evaluate the food safety knowledge and practice of the ‘Doi’ workers in Bogura as well as the associated factors. In this cross-sectional study, 150 people participated voluntarily and answered a structured questionnaire. The final result showed that the current state of their food safety knowledge (4.7±2.9; scale=15) and practices (21.5±6.2; scale=60) was not satisfactory. It was also observed that level of education, job hours, and training experience all had a substantial impact on knowledge and practices. Participants with a high level of food safety knowledge had 5.5 times more desired food safety practices than their peers. Therefore, the current findings emphasize the need of food safety trainings, certification, and employing educated personnel in the 'Doi' sectors.
I. INTRODUCTION
Bangladesh has an eclectic collection of traditional food products which are well known for their unique taste and nutritional value and certainly, these products are very suitable for the global market. 'Doi' (Yogurt) of Bogura is one of them. 'Doi' is a completely fermented dairy product made from cow's milk or skim milk, with sugar or in some special cases without sugar. Generally, yogurt is regarded as ready-to-eat food that is regularly consumed around the world for energy production and good health [1]. It is a well-balanced diet that contains nearly all of the nutrients present in raw milk and is a strong source of probiotics [2]. Due to its physiological, nutritional, and beneficial properties, it is considered a highly demanded and widely accepted popular drink [3]. When people of Bangladesh think about any type of yogurt, the 'Doi' of Bogura comes first to mind and all over the country it is commonly recognized as 'Bogurar Doi' as it is the best in taste and quality. This product has 250 years of strong historical background behind its birth. It is anticipated that skilled artisans and the environment (weather, water, and soil) of Bogura which is favorable for producing good quality milk, are the reasons for the uniqueness of 'Doi' of Bogura.
The production and commercialization of Bogurar Doi in Bangladesh are rapidly growing on a regular basis along with its increasing demand and popularity. 'Doi' (Yogurt) is produced and consumed throughout the country, but 'Doi' of Bogura is the best in terms of taste and acceptance. Because of the inimitable savors and distinctive characteristics, the product has a huge possibility of capturing the global market. But various factors such as lack of research to extend the shelf-life of the product, inappropriate handling during processing, absence of a proper hygienic condition, and so on make this task quite difficult. In fact, the significance of food safety knowledge both from the producers' and workers' points of view in the processing of dairy products is inevitable. A study showed that the microbiological quality of the raw milk and 'Doi' samples collected from different regions of Bangladesh were not satisfactory because of their high bacterial loads and precaution is needed to the management of raw milk and yogurt [4]. On the other hand, milk and yogurt, are very susceptible to bacterial contamination and thus readily perishable [5]. Dairy products are also considered potentially hazardous if the processing occurs in nonconforming conditions. Thus, they are classified as a highrisk food commodity. Microorganisms from different sources including personnel, water, equipment, additives, and packaging materials can contaminate dairy products during processing [6]. As a result, milk and milk-derived products can contain a wide range of microorganisms and serve as implicated food vehicles of foodborne diseases. In the case of yogurt, it was found that in Nigeria yeast and mold are considered primary contaminants, and fungi thrive in acidic environments and multiply rapidly [7]. Another study reveals that Campylobacter, Brucella, Salmonella, Listeria monocytogenes, Escherichia coli, and Shigella are only a few bacteria that have the ability to contaminate different dairy products and may lead to death [8].
A fundamental prerequisite for human health is access to adequate safe food. Food contamination and adulteration have become serious public health concerns; hence, food safety is an unceasing public health issue today [9,10]. The risks of contaminated and unsafe food are substantial and responsible for many life-threatening diseases, ranging from diarrhea to different variants of cancer [10,11]. In Bangladesh, foodborne disease is considered an alarming problem because a growing number of consumers suffer from a variety of health problems after consuming adulterated food [12]. As food production operators are the first line of defense in providing safe food to consumers, employees play a vital role at different stages of processing in preventing foodborne disease outbreaks. A lack of ample knowledge related to food safety can make a person handle food improperly, which results in unsafe food. Along with this, improper food handling techniques can lead to foodborne disease and contamination, which can harm consumers' health in the long run [13]. Research has also discovered that a considerable percentage of foodborne diseases emerge as a result of workers' improper food handling practices and illness, and that these are the fundamental causes of foodborne disease outbreaks [14]. Therefore, ample knowledge regarding food safety issues and hygiene practices is very important for food industry personnel. However, in the current context of Bangladesh and for commercial food entities here, this is a matter of great concern. To the authors' knowledge, limited research has been conducted on dairy farm workers and no research on food safety knowledge and hygiene practices among 'Doi' sector employees exists in Bangladesh. To enable and facilitate the management of this important 'Doi' sector, we felt an urge to comprehend and evaluate the workers' current state of food safety knowledge and practices. We believe that this study will support higher management as well as the government to set standards and develop useful strategies to uphold Bangladesh's 'Doi' sector globally. Hence, the objectives of this present research were to evaluate the food safety knowledge and practices among the 'Doi' workers in the Bogura district of Bangladesh, as well as to determine the factors contributing to the current situation.
II. MATERIALS AND METHODS

A. Study Place and Period
The majority of 'Doi' producers operate within Bogura city and the surrounding areas. So, the current study was carried out in the Bogura district from January to May 2020. Almost all of Bogura's popular 'Doi' enterprises participated in the survey (22 in number). The first six months of the year are peak months for 'Doi' manufacturing, with the greatest number of workers employed in the related companies. When winter approaches, the amount of 'Doi' manufactured decreases, and the number of workers decreases as well. As a result, we chose the peak period for this study in order to interview as many workers as possible.
B. Study Design and Data Collection
A structured and self-administered questionnaire was established for conducting the study. For convenience, two versions (English and Bengali) of the questionnaire were prepared. One hundred and fifty (150) people took part in the study. All respondents voluntarily participated in the ongoing study, and enough time (60 minutes) was granted to complete the questionnaire. The questionnaire came with a brief completion instruction explaining the purpose of the study as well as directions on how to fill it out. Participation in this study remained confidential, and a signed consent form was acquired from each respondent before the survey. A pilot study was conducted before the main work on 40 workers (chosen at random) to determine the clarity of the questions, time management, and consistency. The first section of the questionnaire was designed to collect demographic information. The second section of the questionnaire was designed to accumulate information about food safety knowledge and practices. There were 15 closed-ended questions in both the knowledge and practice sections. In the knowledge part, three possible response choices were presented (true, false, don't know); in the practice part, five frequency choices were presented (never = 0, rarely = 1, sometimes = 2, often = 3, always = 4). The response options were chosen in this manner to restrict the likelihood of selecting the correct answer by chance. The scores were converted to 0 to 100 points; a score above 60% was considered good and below that was considered poor. The final score was computed by adding all of the correct answers together. Participants were considered eligible for the study if they met the following criteria: (i) they had direct contact with the process, (ii) they had at least 6 months of work experience in 'Doi' production, and (iii) they were free of any disability or sickness. The food safety knowledge score varied from 0 to 15, with a score of 8 or above indicating a good level of knowledge and a score of less than 8 indicating a poor level of knowledge. The practice score varied from 0 to 60, with scores below 30 indicating poor practices and scores of 30 or higher indicating a satisfactory level of practice.
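To make the scoring scheme above concrete, the minimal sketch below shows how raw questionnaire responses could be converted into the knowledge and practice scores and the good/poor categories described in this section. It is an illustration only; the answer key, the response values and the variable names are hypothetical, not the study's actual instrument.

def knowledge_score(answers, answer_key):
    """One point per correct answer; 'don't know' and wrong answers score 0."""
    return sum(1 for given, correct in zip(answers, answer_key) if given == correct)

def practice_score(frequencies):
    """Each of the 15 practice items is rated 0 (never) to 4 (always)."""
    return sum(frequencies)

def categorize(knowledge, practice):
    # Thresholds as defined in the Methods: knowledge >= 8 of 15 is "good",
    # practice >= 30 of 60 is "good".
    return ("good" if knowledge >= 8 else "poor",
            "good" if practice >= 30 else "poor")

# Hypothetical respondent: 5 of 15 knowledge items answered correctly,
# practice item ratings summing to 21 of a possible 60
example_knowledge = knowledge_score(["true"] * 15, ["true"] * 5 + ["false"] * 10)
example_practice = practice_score([2, 1, 0, 3, 1, 2, 2, 1, 0, 2, 1, 2, 1, 2, 1])
print(example_knowledge, example_practice, categorize(example_knowledge, example_practice))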
C. Statistical Analysis

SPSS (version 23.0) software was used to analyze the data. To summarize variables of relevance, descriptive statistics (e.g., response percentage, mean, and standard deviation) were employed. Analytical statistics, including bivariate analyses and multiple logistic regression models, were utilized to discover characteristics related to the 'Doi' handlers' food safety knowledge and practices. The demographic variables were entered into both univariable (unadjusted) and multivariable (adjusted) logistic regression models, with the exception of gender and health certificates (which were eliminated because of the lack of variance in their categories). Odds ratios with 95% confidence intervals (CI) were used to analyze the strength of the connection between independent variables (such as age and education level) and dependent variables (food safety knowledge and practice).
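As an illustration of how the adjusted odds ratios reported later can be obtained, the sketch below fits a logistic regression and converts the coefficients into odds ratios with 95% confidence intervals. It is a generic example using statsmodels rather than the authors' SPSS analysis; the column names and the toy data frame are assumptions.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data frame: one row per worker, binary outcome "good_knowledge"
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "good_knowledge": rng.integers(0, 2, 150),
    "education": rng.choice(["none", "primary", "secondary", "higher"], 150),
    "hours_per_day": rng.choice(["<8", ">=8"], 150),
    "training": rng.choice(["no", "yes"], 150),
})

# Multivariable (adjusted) logistic regression
model = smf.logit("good_knowledge ~ C(education) + C(hours_per_day) + C(training)",
                  data=df).fit(disp=False)

# Exponentiate coefficients and confidence limits to obtain adjusted odds ratios
odds_ratios = pd.DataFrame({
    "AOR": np.exp(model.params),
    "CI 2.5%": np.exp(model.conf_int()[0]),
    "CI 97.5%": np.exp(model.conf_int()[1]),
})
print(odds_ratios.round(2))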
D. Ethical Approval
The ethical committee of Sylhet Agricultural University approved the design of the study. Respondents were also asked to provide informed consent. There was no use of personal information during the study. All data were kept on a password-protected computer that only the research team had access to.

III. RESULTS

Table I depicts the demographic profiles of the 'Doi' workers; all 150 food handlers who participated in this study were men. The mean age of the respondents was 32 years (SD = 9.6), with a range of 16 to 66 years. Almost one-fourth of those polled (n = 33, 22.0%) had no formal education. The largest number of respondents (n = 59, 39.3%) had primary education. More than half of the respondents (n = 80, 53.3%) had worked for more than 5 years, but none had food safety training or a health certificate. Most of the participants were helpers (n = 110, 73.3%), and the rest were cooks (n = 40, 26.7%).

Table II summarizes the assessment of the 'Doi' handlers' food safety knowledge. The majority of participants reported having a fair level of awareness of general hygiene and sanitary procedures in the workplace, such as washing hands before work (96%), wearing gloves (90%), and food storage knowledge (89.3%). Almost all (93.3%) of those polled either did not know or answered incorrectly regarding the food poisoning question. Overall, the majority of respondents admitted to being unaware of specific foodborne diseases and pathogens. Approximately 98% of respondents did not answer correctly or did not know much about environmental hygiene-product relationships and the elimination of pathogenic bacteria. However, everyone seemed to be knowledgeable about cleaning the process area and equipment before production for safe operation.

On a scale of 15.0, the mean score for food safety knowledge was 4.7 (SD = 2.9). About 20% (95% CI 15.7-24.7) of the participants had good food safety knowledge. Working hours per day (p < 0.001) and food safety training (p < 0.001) were found to be significantly linked with food safety knowledge using the Chi-square test (Table III). Table III contains the findings of the multiple logistic regressions that predicted the factors related to high levels of food safety awareness among study participants. According to the adjusted regression model, respondents who had higher secondary education [adjusted odds ratio (AOR) = 4.55, 95% CI 1.13-18.77], had a work experience of > 10 years (AOR = 9.33, 95% CI 1.93-15.10), worked for ≥ 8 h per day (AOR = 6.09, 95% CI 2.61-13.02), and had food safety training (AOR = 8.96, 95% CI 2.15-27.32) were more likely to possess a good level of food safety knowledge compared to their counterparts.

Table IV displays an evaluation of the food safety practices of the participants. Approximately half of those surveyed (46-48%) indicated they occasionally ate, drank, and smoked on the 'Doi' processing floors. Most of them (93.6%) said they never or rarely donned an apron or wore a hair cover at work. More than half of those polled (53.5%) said they occasionally worked while they had diarrhea (Table IV). We discovered a worrisome fact: very few people acknowledged washing their hands before touching items (n = 12, 8%), washing hands after using the toilet (n = 17, 11.3%), and using sanitizer after washing hands (n = 4, 2.7%).
C. Food Safety Practices and Their Associated Factors among 'Doi' Workers
The mean score of food safety practice was 21.5 (SD = 6.2) on a scale of 60.0. Merely 16.3% (95% CI 12.3-20.7) of the workers reported a good level of food safety practices (Table V). The number of hours worked per day (p < 0.001), food safety training (p = 0.018), and level of food safety knowledge (p < 0.001) were found to be significantly related with food safety practices (Table V). The findings from the multiple logistic regression analyses can be observed in Table V. According to the adjusted regression analyses, the likelihood of a satisfactory level of practice was nearly 8.5 times greater among survey participants who worked ≥ 8 hours per day compared to those who worked < 8 hours per day (AOR = 8.47, 95% CI 3.13-22.95). When compared to their counterparts, individuals with a high level of food safety knowledge had 5.5 times higher odds of good food safety practice (AOR = 5.63, 95% CI 2.29-13.81).
IV. DISCUSSION
In this current research, the features linked with food safety knowledge and practices among 'Doi' workers in Bogura, Bangladesh were evaluated. Consequently, 'Doi' handlers in this study had low food safety knowledge and practice. Respondents were, however, more aware of certain food safety issues than others within the particular areas. For example, the majority of responders were conscious that hand washing prior to work, wearing gloves and aprons, and thoroughly cleaning instruments reduce the hazard of food contamination. They were, however, less knowledgeable with high-risk food poisoning categories, as well as specific foodborne diseases and microorganisms. A survey of dairy workers in northern China found that they have low levels of knowledge but acceptable attitudes and behavior [15]. However, the Bogura 'Doi' workers' understanding of several pathogenic microorganisms differs from the study of Young [16]. In addition, a study of small-scale dairy producers in Tajikistan's urban and peri-urban areas indicated that the farmers were similarly unaware of the causal microorganisms [17].
In terms of factors related to food safety knowledge, our findings showed that education, work experience, and training were all strongly associated with food safety knowledge. According to our findings, 'Doi' handlers with higher secondary education and some form of food safety training were more likely to have a high level of food safety awareness. Comparable studies conducted in China and Canada found that the educational level and professionally relevant training of dairy and cheese handlers were substantially related to their level of knowledge and food safety practices [15], [16], [18]. Through education and professional training, 'Doi' handlers can be exposed to food safety issues such as washing hands, using gloves and aprons, proper cleaning of the instruments, and identifying the pathways through which milk may be contaminated during handling. As a result, people who have received more education and training may have a better grasp of food safety. This finding stresses the importance of increasing higher education in Bangladesh, as well as providing regular training for 'Doi' handlers. The great majority of 'Doi' employees stated that they had no food safety or hygiene training. Given the scarcity of 'Doi' handler training programs in Bangladesh, it is vital that all 'Doi' handlers receive such food safety and hygiene training. 'Doi' handlers with more than ten years of experience were more likely to be informed about food safety. However, this conclusion contradicts a comparable study [19], which found no significant relationship between years of experience and food safety knowledge. According to our findings, having a high degree of food safety knowledge can lead to good food safety practices. This study discovered that people who were aware of food safety issues were 5.7 times more likely to practice food safety. These findings strongly suggest that raising food safety awareness is a key component of encouraging appropriate food handling practices (e.g., hygienic and safe 'Doi' production).
V. CONCLUSION
According to the current study, 'Doi' workers in Bogura, Bangladesh currently lack adequate food safety knowledge, and the relevant practices are subpar. As a result, it is obvious that efforts must be made to improve their knowledge and habits. We discovered that educational status, food safety training, and working hours are all significantly related to their knowledge level, and that good food safety knowledge influences food safety practices. To some extent, employing educated people in the 'Doi' companies can overcome this problem. Furthermore, the companies must provide comprehensive food safety training to their personnel on a regular basis. In order to improve the competencies of the 'Doi' workers of Bogura, this training program should be structured with specific guidelines that address food safety and hygiene issues. This will necessitate the Bangladesh Food Safety Authority (BFSA) incorporating regular training sessions for dairy handlers into its main missions of regulating and managing the dairy industries. On the other hand, 'Doi' enterprises must pursue food safety certification systems such as HACCP, ISO, and HALAL in order to make their products acceptable in worldwide markets. 'Bogurar Doi' may also qualify under the geographical indication (GI) act as a traditional product with special attributes.
ACKNOWLEDGMENT AND FUNDING
This work was supported by the University Grants Commission (UGC) of Bangladesh. Beside this, we sincerely acknowledge the participation of the 'Doi' workers of different manufacturers in Bogura, Bangladesh.
DECLARATION OF INTEREST
We declare that there are no conflicts of interest.
|
2021-11-23T16:09:02.815Z
|
2021-11-09T00:00:00.000
|
{
"year": 2021,
"sha1": "5b118e0f1e08c08c419ffe0f3e5ec0f54030a9ae",
"oa_license": "CCBYNC",
"oa_url": "https://www.ejfood.org/index.php/ejfood/article/download/397/208",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "9d23e0b79bf3b9b53aaf15c440dcb6d264ab7327",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": []
}
|
24369504
|
pes2o/s2orc
|
v3-fos-license
|
Serum zinc and copper levels in children with febrile convulsion
Pharmaceutical Sciences Research Center, Department of Toxicology and Pharmacology, Faculty of Pharmacy, Mazandaran University of Medical Science, Sari, Iran
Department of Pediatrics, Faculty of Medicine, Mazandaran University of Medical Science, Sari, Iran
Pharmaceutical Sciences Research Center, Department of Clinical Pharmacy, Faculty of Pharmacy, Mazandaran University of Medical Science, Sari, Iran
Mazandaran University of Medical Science, Sari, Iran
Introduction
Febrile seizures or febrile convulsions (FC) are the most common neurologic disorder of infants and children 6 through 60 months of age. They are an age-dependent phenomenon, occurring in 2 to 5 percent of children younger than six years of age, and are usually associated with fever (a temperature greater than 38 °C) but without evidence of intracranial infection or a defined cause (1). If a convulsion lasts more than 5 minutes, complications such as mental disability, hemiplegia and death may threaten the child. Although the exact mechanisms of fever and seizure genesis are not yet known, many etiologic factors contribute to them, and the occurrence of fever alone does not result in convulsion in this group. In other words, fever in these children is necessary but not sufficient. It has been proved that genetics plays a meaningful role in seizure type as a triggering factor (2).
Besides genetic factors, family background, immunologic disorders, iron deficiency, changes in neural intermediaries and trace elements affecting these intermediaries have been recognized to be involved in this disease, apart from metal elements (3)(4)(5). Zinc (Zn) and copper (Cu), basic cations of the human body, play a significant role as cofactors in more than 300 enzymatic activities (6). The Zn ion is an essential element of high importance for normal brain development (7), especially for the gamma-aminobutyric acid (GABA) pathway, a reduction in whose activity can cause convulsions (8). Hypozincemia activates the NMDA receptor, one of the glutamate family of receptors, which may play an important role in the induction of epileptic electrical discharges (8). Fever is a clinical sign characterized by a rise in body temperature above the normal level.
The hypothalamus controls the core body temperature under normal conditions and sets it within the normal range (36.5-37.5 °C). Exogenous pyrogens (external fever-inducing substances such as gram-negative bacterial lipopolysaccharide) or endogenous ones (such as interleukin-1) cause fever by acting directly on the hypothalamic thermoregulatory center; body temperature then rises through the release of epinephrine and contraction of vessels (particularly peripheral vessels) until a new regulation point is reached and fever occurs (9,10). Considering the incidence of febrile seizures, their possible complications, the high hospitalization costs, and the fear they cause in parents, identification of their causes for prevention is essential. This study evaluated the relationship between serum levels of Zn and Cu and seizure occurrence and fever intensity in febrile children.
Materials and methods
In this case-control study, serum Zn and Cu levels of 270 children with febrile seizure, referred to a teaching hospital (Bu-Ali Sina, Sari, Iran), were evaluated over a 2-year period. The study was approved by the Ethical and Research Committee of Mazandaran University of Medical Sciences (No: 88-142). Patients were in the 6-month to 6-year age bracket (the sample number was calculated based on the sample volumes of previous studies and the sample volume formula) (3)(4)(5). After explaining the study to parents and obtaining their consent, cases entered the study and were examined by a pediatric neurology specialist to be placed in one of 3 groups: a) children with febrile convulsion, b) febrile children without convulsion and c) healthy ones (without fever and convulsion). The exclusion criteria for patients in this study were age younger than 6 months or older than 6 years, mental or cerebral retardation or signs of a genetic syndrome, complex (atypical) convulsion, chronic disease (heart, liver, kidney), malnutrition, and conditions that lower the serum levels of the studied metals, including hemolysis, dehydration, vomiting, dysentery and pneumonia. After physical exams and measuring the body temperature to confirm the fever of cases and controls, 5 mL of blood was taken from peripheral vessels within the first 12 hours of hospitalization.
Statistical analysis
Data were analyzed with SPSS 16 software (Chicago, USA); the independent samples t-test and ANOVA were used to compare serum levels between study groups, and a P value ≤ 0.05 was considered statistically significant.
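For illustration, a one-way ANOVA comparing a serum trace-element level across the three study groups could be run as in the minimal sketch below. The values are simulated placeholders rather than the study data, and the analysis here uses SciPy instead of SPSS.

import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Simulated serum Zn levels (mg/l) for the three groups; the means loosely
# follow the magnitudes reported in the Results, but these are not real data.
fc_group = rng.normal(0.43, 0.38, 90).clip(min=0.01)       # febrile convulsion
febrile_group = rng.normal(0.66, 0.37, 90).clip(min=0.01)  # fever, no convulsion
healthy_group = rng.normal(0.97, 0.15, 90).clip(min=0.01)  # healthy controls

# One-way ANOVA across the three groups
f_stat, p_value = stats.f_oneway(fc_group, febrile_group, healthy_group)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")

# Pairwise comparison of two groups with an independent-samples t-test
t_stat, p_pair = stats.ttest_ind(fc_group, healthy_group)
print(f"t = {t_stat:.2f}, p = {p_pair:.4f}")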
Results
Patients' demographic characteristics are presented in Table 1.
There were no significant differences between the three groups in age, weight and gender. No meaningful differences were observed in serum levels of Zn and Cu between girls and boys (Tables 2 and 3). The mean serum Zn level in children with FC (0.43 ± 0.38 mg/l) was significantly lower than in the other two groups (Table 4). Also, serum Zn levels in convulsion-free febrile children (0.66 ± 0.37 mg/l) differed significantly from those of the healthy group (0.97 ± 0.15 mg/l). Serum Cu concentrations in the three study groups are reported in Table 5. Mean serum Cu levels in children with FC and in non-FC febrile patients (1.16 ± 0.38 and 1.53 ± 0.76 mg/l, respectively) were significantly higher than in healthy children (0.53 ± 0.24 mg/l) (p value < 0.05).
Discussion
The results of the present study demonstrated that children with febrile convulsion had significantly lower serum Zn levels than the two other groups (febrile children without convulsion and healthy children). The higher serum Cu levels observed in febrile children are in line with the report of reference (15), which found that the mean serum Cu level in the control group was significantly lower than that of the case group.
Conclusion
We observed significantly lower serum Zn levels in children with febrile seizure and significantly higher serum Cu levels than in the control group. There was no significant difference in serum Zn and Cu levels in terms of sex.
|
2017-10-26T11:25:47.578Z
|
2016-09-10T00:00:00.000
|
{
"year": 2016,
"sha1": "154a03dc5bc842d64c7d9ec2b0b57eefb7b6d044",
"oa_license": "CCBYNC",
"oa_url": "http://pbr.mazums.ac.ir/files/site1/user_files_88c428/mrrafati-A-10-67-2-98d4fd2.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "3063dd061d7282b4aa63ce4f0d8af811150db0d2",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
196536717
|
pes2o/s2orc
|
v3-fos-license
|
The Effects of Resilience Training on the Self-Efficacy of Patients with Type 2 Diabetes: A Randomized Controlled Clinical Trial
ABSTRACT Background: In view of the effect of self-efficacy on empowerment of patients and the role of resilience in the psychological adjustment and physical health of patients, the present study was conducted to examine the effect of resilience training on the self-efficacy of patients with type 2 diabetes. Methods: This double-blinded controlled clinical trial was carried out on 143 diabetic patients in the diabetes clinic in Shiraz between June 2016 and January 2017. Patients were selected using a simple sampling method and randomly divided into control (n=71) and intervention (n=72) groups. The intervention group received 6 sessions of training workshops on resilience skills. The control group received the routine educational pamphlets. The subjects completed diabetes self-efficacy questionnaire before, immediately after, and one month after completion of the intervention. Data were analyzed using SPSS version 16.0. Repeated measure ANOVA, t-test, and Chi-Square tests were used. P<0.05 was considered statistically significant. Results: Based on the results of the repeated measures ANOVAs, the overall score of self-efficacy was found to be significantly increased in the intervention group. Compared with the control group, the intervention group reported significantly higher levels of self-efficacy immediately after the intervention (P<0.001) and one month later (P<0.001). Conclusion: Training programs in resilience skills improves the self-efficacy of patients with type 2 diabetes. The results of this study support the use of resilience training in diabetics; it provides the health professionals and policymakers with an increased understanding of how to recognize the resilience skills for the improvement of self-efficacy. Trial Registration Number: IRCT2016022726790N1
Introduction

As the most common metabolic disease, type 2 diabetes is considered one of the most important concerns in healthcare in developing and developed countries. 1,2 The prevalence of the disease is increasing in the world and it can influence people of all ages, genders, ethnicities, and social classes. 3 According to the World Health Organization, education is at the core of diabetes prevention and treatment. 4 Self-efficacy is an important component in improving diabetes self-management skills. 5 Research conducted in this area suggests that self-efficacy in diabetic patients is not satisfactory. 6,7 However, it seems that education can enhance the patients' self-efficacy, and if patients reach desirable levels of it, they will be able to manage their diseases well and prevent complications, thus improving their quality of life. 8 Previous studies reveal that high self-efficacy in diabetic patients is associated with life satisfaction, better adaptation, reduced depression, and proper control of diabetes. 9,10 The concept of self-efficacy has been derived from the social cognitive theory of Bandura. It refers to an individual's beliefs and judgments about his/her own ability to carry out tasks and functions. Self-efficacy means the belief that one can carry out certain activities successfully and expect the good results that will follow. 11 Previous research has investigated effective factors and interventions in improving the self-efficacy of diabetic patients. 10,12-14 A potentially important factor that has received inadequate attention is resilience. Studies have revealed a strong relationship between low levels of resilience and the development of diabetes. 15 Resilience involves positive adaptation in response to adverse conditions. People acquire the ability to deal with challenges of family and social life effectively through the process of resilience. 16 When facing adverse events, resilient persons are more likely to reject negative thoughts about themselves or their abilities. Resilience is a broad construct including a combination of positive traits or behaviors that facilitate the successful management of adversity or stressors in a person's life. 16-18 This construct has grown over the past decades. However, there is a controversy about the usefulness of this construct in psychology. 19,20 Throughout research on resilience, the operationalization of this construct has considerably varied within the literature. This has been viewed both as a criticism and a positive attribute of resilience studies. Although some studies have argued that variation in defining the key components of resilience has limited the generalization and interpretation of the results, others have believed that some variation in methodology is essential in developing our knowledge of this construct. 21 Despite these inconsistencies in the research on resilience, several studies have suggested factors that have consistently been shown to promote successful coping with overwhelming stressors or shown to be related to mental health in the general population. These factors include self-awareness, 16,22 positive thinking and an optimistic outlook, 16,17 good problem-solving skills, 17,19 and stress management. 17,23,24 The positive effects of resilience on some chronic diseases, such as heart disease 25 and joint pains, 26 have been studied and proved; yet, a review of the literature shows that no study has examined the impact of resilience in diabetic patients.
The rate of diabetes is increasing and the participation of diabetic patients in self-care is becoming more important. In view of the effect of self-efficacy on the empowerment of patients and the role of resilience in the psychological adjustment and physical health of patients, the present study was conducted to examine the effects of resilience training on the self-efficacy of patients with type 2 diabetes. It is predicted that, for persons with diabetes, resilience-related characteristics and responses might be important contributors to self-efficacy.
Materials and Methods
This double-blinded controlled clinical trial was conducted in the south of Iran from June 2016 to January 2017. The participants of this study consisted of 143 patients with type 2 diabetes, admitted to the largest center for diabetic patients in Motahari Institute affiliated to Shiraz University of Medical Sciences (SUMS), Shiraz, Iran.
The sample size was calculated using the formula below, with α=0.05, β=0.01 and the means (mean1=63, mean2=52) and standard deviations (S1=14.6, S2=12.5) taken from the results of a previous study. 27 At least a 112-subject sample size (56 subjects in each group) was determined for the study. Considering a 30% attrition rate, the final sample size for both groups was about 146, and it was raised to 150 (75 subjects in each group).
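The formula referred to above is not reproduced in the text as extracted. The standard two-sample comparison-of-means formula, evaluated with the stated inputs, gives the reported 56 subjects per group, so it is shown here as a plausible reconstruction rather than a verbatim quotation of the original:

n = \frac{(z_{1-\alpha/2} + z_{1-\beta})^{2}\,(S_{1}^{2} + S_{2}^{2})}{(\mu_{1} - \mu_{2})^{2}}
  = \frac{(1.96 + 2.33)^{2}\,(14.6^{2} + 12.5^{2})}{(63 - 52)^{2}} \approx 56 \text{ subjects per group}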
The inclusion criteria of the study were being diagnosed with type 2 diabetes by an endocrinologist, being in the age range of 30-80 years, being willing to participate in the research, being literate, having a resilience score of less than 52, and having a self-efficacy score of less than 134. The exclusion criteria of the study were inability to participate in the training program due to the severity of the disease or hospitalization, mental disorders and mental retardation, having graduated in a field related to medical sciences, absence from more than two training sessions, and participation in similar workshops.
Overall, 162 patients were assessed for eligibility. The subjects were selected based on the simple sampling method (selected from a random number table) among the records of all the diabetic patients available at the diabetes center. The patients interested in participating in the research gave their written informed consent to complete the resilience and self-efficacy questionnaires. The individuals who obtained a resilience score of higher than 52 and a self-efficacy score of higher than 134 were excluded from the study (9 patients). Moreover, 3 patients were excluded from the study due to their lack of willingness to participate in the study. The remaining patients (150) were randomly divided into control (n=75) and intervention (n=75) groups using the software Random Allocation and randomized blocking with a random sequence of 25 sextuple blocks. During the study, 4 patients in the control group were excluded due to hospitalization and 3 others in the intervention group due to lack of participation in the sessions ( Figure 1).
The outcome measures of the study consisted of demographic information, resilience, and self-efficacy. In addition to the socio-demographic assessment of age, gender, marital status, education level, employment, and duration of being affected with type 2 diabetes, the following variables were measured. Resilience of the subjects was measured using the Connor-Davidson Resilience Questionnaire. The questionnaire was designed in America in 2003. This tool can differentiate resilient people from non-resilient ones in clinical and non-clinical groups. The questionnaire contains 25 items, each rated on a 5-point scale, with higher scores reflecting greater resilience. The scores of the questionnaire ranged from 0 to 100. Internal consistency (Cronbach's alpha) for the full scale was 0.89. Test-retest reliability demonstrated a high level of agreement, with an intra-class correlation coefficient of 0.87. An assessment of the construct validity of the questionnaire using factor analysis yielded five factors. Moreover, its convergent and divergent validity was assessed in various groups. 28 This tool has been translated into Persian and its validity and reliability have been confirmed. An exploratory factor analysis showed that the factor loadings of the items were significant. The reliability of the instrument was assessed in terms of its internal homogeneity, and the Cronbach's alpha of the entire instrument was found to be 0.87. 29 In order to evaluate the self-efficacy of the diabetic patients, the Diabetes Management Self-Efficacy Scale (DMSES), developed by Bijl et al. (1999), was used. This scale assesses the self-efficacy and ability of diabetic patients in various dimensions, including dietary adherence, level of physical activity, and blood glucose. It is composed of 20 questions, scored on an 11-point Likert scale. 30 The scores of this tool range between 0 and 200; based on their score, people are divided into three groups: high self-efficacy (134-200), moderate self-efficacy (66-133) and low self-efficacy (0-65). 31 The validity of the Persian version of the questionnaire was examined in a study conducted by Noroozi and Tahmasebi in 2014. The original English-language version of the questionnaire was translated into Persian using a forward-backward translation method. The validity of the questionnaire was assessed through the content validity ratio (score of 0.80 or higher) and factor analysis. The rotation matrix of the indices yielded 5 factors. The Kaiser-Meyer-Olkin measure of sampling adequacy was 0.88 and Bartlett's test of sphericity was significant (χ2=2914.2, df=190, P=0.001). To test the reliability, internal consistency was evaluated by Cronbach's alpha (α=0.92). The construct validity of the instrument was determined using factor analysis and criterion validity. Criterion-related validity showed that the DMSES was a significant predictor of diabetes self-management (R=0.61; P<0.001). 32 Patients in the control group received the routine educational pamphlets, including education on the prevention of diabetic foot, exercising, nutrition, and blood glucose control, while the patients in the intervention group received resilience skills training. The intervention was designed based on the patients' needs and the existing literature in this field. 16,17,19,23,24 The educational intervention consisted of six 4-hour sessions held over six weeks.
The patients in the intervention group were divided into smaller groups of 15 to 17 members by the researcher to hold the training workshops.
The beginning of the first session was practically a needs-assessment session toward a better organizing of the interventions. After the assessment of the subjects' needs, each educational session was designed to include a variety of educational techniques intended to enhance the participant's learning and keep their attention (for example, visual aids, such as charts, film presentation, and Microsoft Power Point slideshows). Each session started with a lecture given by a psychiatric nurse. Then, the discussion and group training were performed, and a time was assigned for questioning and answering. The location of the training workshops was the conference hall of the center for diabetes patients. The training was provided by one of the psychiatric nurses. The goals and content of each of the six sessions are summarized in Table 1.
To encourage the patients to participate in the study, a free glucose test was given to all the patients, and at the end of the study, some free glucometers were given to some of the participants by lot. At the end of the last session and one month after the end of the intervention, the self-efficacy questionnaire was completed again by the two groups. Patients in the control group received the educational pamphlets on resilience skills at the end of the study. In the current study, the researcher assistant, who had no knowledge of the types of intervention collected the data, and the statistician who analyzed the data were blind to the study groups.
The study was approved by Research Ethics Committee of Shiraz University of Medical Sciences (code: 1394.7635). Before the intervention, all the patients were informed of the objectives of the study, confidentiality of their information, and signed the informed consent form. The patients were also informed that they were free to withdraw at any point of the research and the time and place of the intervention were set by their agreement.
SPSS v. 16 was used for the statistical analysis of the collected data. First, the Kolmogorov-Smirnov test was applied to check the variables for normal distribution. Student's t-test and the Chi-square test were employed to investigate the differences between the two groups regarding demographic and clinical variables. Repeated measures analyses of variance were used to determine whether the improvements in the outcome variable (self-efficacy) changed over time. The significance level was set at P<0.05.
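As a rough illustration of the group-by-time analysis described above, the sketch below fits a linear mixed model with a random intercept per patient, which is an alternative to (not a reproduction of) the repeated-measures ANOVA run in SPSS by the authors. The long-format data frame, its column names and the simulated values are assumptions made for the example.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per patient per time point
rng = np.random.default_rng(7)
patients = np.arange(143)
frames = []
for t, label in enumerate(["baseline", "post", "one_month"]):
    frames.append(pd.DataFrame({
        "subject": patients,
        "group": np.where(patients < 72, "intervention", "control"),
        "time": label,
        "self_efficacy": rng.normal(100, 20, len(patients))
        + (15 * t) * (patients < 72),  # toy group-by-time effect
    }))
data = pd.concat(frames, ignore_index=True)

# Linear mixed model; the group:time interaction plays the role of the
# treatment-by-time effect tested in the repeated-measures ANOVA
model = smf.mixedlm("self_efficacy ~ C(group) * C(time)", data,
                    groups=data["subject"]).fit()
print(model.summary())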
Results
Overall, 143 patients remained in the study. The Kolmogorov-Smirnov test showed a normal distribution of quantitative variables, namely age and self-efficacy. The results of the analysis of the demographic data revealed that the majority of the participants in both groups were female (84; 58.7%). The mean age of the patients in the intervention and control groups was 56.18±11.32 years and 57.59±11.27 years, respectively. Most of the participants in both groups were married (108; 75.5%), and had secondary level of education (54; 37.8%). In terms of the duration of the disease, most of the patients were in the range of 3 to10 months (91; 63.6%). According to the results of Chi-square tests, no significant difference was found between the two groups in terms of demographic characteristics (P>0.05).
According to the results of independent t-tests, patients of the intervention and control groups were homogeneous in terms of their self-efficacy scores at baseline (P=0.05). However, immediately and one month after the intervention, there were significant differences between patients in the two groups regarding self-efficacy scores (P<0.001 for both times) (Table 2). Two-way repeated measures ANOVAs revealed that treatment was a significant factor in ratings of self-efficacy (P<0.001). This means that, regardless of the effect of time, there were significant differences between the groups regarding marginal means of self-efficacy. Time was also found to be a significant factor in ratings of self-efficacy (P<0.001). The results of the repeated measures ANOVA showed significant interactions (treatment×time) for self-efficacy (P<0.001), indicating a greater increase in the self-efficacy of the intervention group compared with the control group (Table 3).
Discussion
The results of the study revealed for the first time that training in resilience skills increases and improves the self-efficacy of type 2 diabetic patients. Findings of this study highlight the importance of measuring resilience in order to develop individual self-efficacy in diabetes populations.
Diabetes is very sensitive to stress effects. Stress in many diabetic patients disrupts the blood glucose control process. Research has revealed that poor control of diabetes and stressful events are positively correlated. 25 One of the basic skills that can help a person in stressful situations is resilience. Despite the potential benefits of the interventions that could improve well-being and reduce stress in type 2 diabetic patients, there have been few studies of positive psychological interventions in this population. In a study, the researchers found a strong relationship between development of diabetes and increased stress on one hand and low and moderate levels of resilience on the other. 15 However, the operationalization of the construct of resilience has considerably varied within the literature. For example, in a study, the researchers trained their adolescent subjects in positive emotions, realistic optimism, and cognitive flexibility in order to increase resilience skills in them, and they found that training in resilience skills led to increased self-esteem and reduced violence. 33 Findings of another study demonstrated the effectiveness of psychosocial resilience training for cardiac health. In the abovementioned study, the researchers examined the effects of teaching positive emotion skills, cognitive flexibility, social support, life meaning, active coping, and therapy strategies such as relaxation training and social support building on increasing resilience in patients with heart diseases. 34 In the same line, findings of another study showed that resilience-based diabetes self-management education improved psychological and physiological health in patients with type 2 diabetes. 35 (Figure 2 presents the marginal means of self-efficacy before the intervention, immediately after, and one month after the intervention in the study groups.)
In the present research, based on the needs of the patients and findings of the studies conducted in this area, 16,17,19,23,24 self-awareness skills, problem solving, anger control, coping with stress, positive thinking skills and optimism were used for diabetic patients. There is no consensus on the main components of resilience, but the significant findings of the present research showed that the skills taught led to increased resilience in patients. Findings of this study suggest the importance of including routine use of resilience skills in the management of type 2 diabetic patients.
The results of the study showed a significant increase in the self-efficacy scores of the intervention group in the post-test, which was reflected in large effect sizes. The results also remained stable at the 1-month follow-up. Overall, in resilience skills training, patients learn to make greater use of coping strategies, which might enhance their self-efficacy. This illustrates that the skills taught in the present study can be especially effective in increasing resilience in patients. Thus, the findings of the study can expand our horizons about the concept of resilience.
The present findings lend support to the notion that training in resilience skills increases self-efficacy in diabetic patients. This fact can be used to improve the control of the disease in people with diabetes. According to a study, a resiliency training approach in people with type 2 diabetes improved their physical health status. 36 This is in line with the results of the present study and confirms the positive influence of resilience skills on self-efficacy.
The findings of the present study are also in agreement with the results of another study that showed that higher self-efficacy was correlated with better self-care behavior. 37,38 Overall, considering the findings of the present research and those of the previous studies, 39 one can claim that resilience training can enhance the self-efficacy in patients with type 2 diabetes mellitus.
One limitation of the current research was the smaller number of males compared to females, though the groups were homogeneous in terms of gender. Another limitation of the study was not investigating the effect of training in resilience skills on disease control in patients, for example, investigating their blood glucose index. It is recommended that future studies should be conducted with larger sample sizes and longer follow-up periods.
Conclusion
As the prevalence of type 2 diabetes follows an increasing trend which imposes a higher economic burden on the community, and considering the low cost of the method used in the study, it is suggested that health policymakers should employ the tested method in health programs. According to the findings of the present study, providing short-term group training in resilience skills can prove useful. This study supports the use of resilience strategies for the diabetic population; it provides health professionals and policymakers with an increased understanding of how to recognize and foster resilience skills for the improvement of self-efficacy. Further studies are needed to confirm the long-term effects of this resilience-based educational intervention.
|
2019-07-05T01:27:39.063Z
|
2019-07-01T00:00:00.000
|
{
"year": 2019,
"sha1": "531b632d99c1d18086c7d80701e14f589bb541e5",
"oa_license": "CCBYNCSA",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "531b632d99c1d18086c7d80701e14f589bb541e5",
"s2fieldsofstudy": [
"Medicine",
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
124253404
|
pes2o/s2orc
|
v3-fos-license
|
Seasonal Variations of Atmospheric Pollution and Air Quality in Beijing
New ambient air quality standards were released in 2012 and implemented in 2013 with real time monitoring data publication of six atmospheric pollutants: particulate matter (PM)2.5, PM10, O3, SO2, NO2 and CO. According to the new standards, Beijing began to publicize real-time monitoring data of 35 monitoring stations in 2013. In this study, real time concentrations of all six atmospheric pollutants of all 35 monitoring stations were collected from September 2014 to August 2015 to investigate the spatial and temporal pattern of the air quality and atmospheric pollutants. By comparing the annual and seasonal variations of all six pollutants' concentrations, it was found that particulate matter, especially PM2.5, is still the major contributor to the deterioration of air quality in Beijing. Although the NO2 and O3 concentrations of some stations were still high under certain circumstances, their contributions to air quality index (AQI) were not comparable to those of PM2.5 and PM10. SO2 and CO concentrations have dropped to well below the qualification standards. Winter and autumn were the most polluted seasons for all pollutants except O3, whose concentrations are higher in summer. South and southeast stations were the most polluted compared with the rest of the stations, especially for particulate matter. Wind profile analysis with heavy pollution situations indicates that low speed southwest or east wind situations have the higher possibility of heavy pollution, suggesting that it is highly possible that long-range transportation of air pollutants from south or east neighboring provinces played an important role in the worsening air conditions in Beijing.
Introduction
Air quality is a major concern for people living in Beijing [1,2], especially after several serious haze-fog events since 2011 [3,4]. However, the air quality evaluation standards released in 1996 (NAAQS-1996) did not take into account PM2.5 (particulate matter with aerodynamic diameter less than 2.5 µm) [5]. As a result, the air quality attainment rate of Beijing under NAAQS-1996 (Air Pollution Index, API < 100) has always been greater than 70% since 2008 [6]. This is inconsistent with public awareness, since people have been suffering from the worsening air pollution [7,8].
In China, air quality monitoring started from the mid-1980s, while the release of daily air pollution levels began from 2000 under NAAQS-1996, taking into account the daily average concentrations of PM10, SO2 and NO2. Twelve years of daily monitoring data provided an effective data source for the analysis of the air pollution levels of major Chinese cities [9]. However, a series of studies suggested that PM2.5 in major northern Chinese cities had been an important issue: although the air pollution index (API) under NAAQS-1996 suggested that air quality was fine, the atmospheric conditions were poor [10][11][12][13][14][15]. In February 2012, China then released new air quality standards (NAAQS-2012) by taking into account six air pollutants: PM2.5, PM10, SO2, O3, NO2 and CO [16]. From January 2013, Beijing adopted NAAQS-2012 to publicize real time air monitoring data at one hour intervals along with 73 other cities by deploying 35 permanent air quality monitoring stations. These monitoring data provided unique tools to analyze the present atmospheric pollution levels for Beijing, the capital of China.
In this paper, we collected hourly air quality data of all 35 air quality monitoring stations in Beijing from September 2014 to August 2015 to give a comprehensive analysis of the air quality evaluations under the new national ambient air quality standards.
Data and Methods
From January 2013, Beijing began to publicize real time air quality monitoring data under the National Ambient Air Quality Standards released in 2012 (NAAQS-2012) at one hour intervals on a municipal web platform [17]. In this paper, hourly monitoring concentrations of all six pollutants (PM2.5, PM10, O3, SO2, NO2 and CO) from September 2014 to August 2015 were collected by deploying a web download program. In total, more than 300,000 hourly monitoring data were collected and analyzed in this paper. The 35 monitoring stations were divided into six groups: 12 urban stations, 11 suburban stations (four in the north and seven in the south of Beijing), seven background stations (four in the north and three in the south) and five traffic monitoring stations, shown in Figure 1 and Table 1. Although the real-time web platform publicizes current monitoring data, historical data older than two weeks are not provided publicly. Therefore, our web program accessed the web platform at an interval of half an hour to download the real-time data. Due to equipment failure or internet transfer errors, some data were missing during the collection. According to NAAQS-2012, daily observations covering at least 20 h were considered valid, while the rest were abandoned to ensure the representativeness of the data. The 12 urban stations were deployed in the urban areas of Beijing, while the 11 suburban stations were located at the downtown areas of suburban counties in Beijing. The seven background stations were placed at areas far away from human activities: Station 24 is the background for urban areas; Stations 25 and 26 are for monitoring the transportation from the north and northwest directions; Stations 27 and 28 in the northeast and southeast mainly monitored the transportation from nearby Tianjin; Stations 29 and 30 in the south and southwest were for monitoring the data from the heavily polluted Hebei Province. Different from the previous 30 stations, the five traffic monitoring stations were placed just next to the main road as indicated in Table 1, while the other 30 stations were 150 m away from the main road according to NAAQS-2012.
Table 2. Breakpoint concentrations for the calculation of the IAQI under NAAQS-2012 (PM10, SO2, NO2 and PM2.5 are 24-h averages in µg/m3; O3 is the 1-h average in µg/m3; CO is the 24-h average in mg/m3).
IAQI  PM10  SO2   NO2  PM2.5  O3    CO
50    50    50    40   35     160   2
100   150   150   80   75     200   4
150   250   475   180  115    300   14
200   350   800   280  150    400   24
300   420   1600  565  250    800   36
400   500   2100  750  350    1000  48
500   600   2620  940  500    1200  60
The AQI is an index that indicates the pollution level of the atmosphere, ranging from 0 to 500. The higher the AQI value is, the heavier the atmospheric pollution is. According to NAAQS-2012, PM2.5, PM10, ozone (O3), sulfur dioxide (SO2), nitrogen dioxide (NO2) and carbon monoxide (CO) were included in the calculation of the AQI. The first step in calculating the AQI is to calculate the IAQI (individual air quality index) for each pollutant. The IAQI of each pollutant mentioned above is calculated as follows:
$$\mathrm{IAQI}_P = \frac{\mathrm{IAQI}_{Hi} - \mathrm{IAQI}_{Lo}}{BP_{Hi} - BP_{Lo}}\,(C_P - BP_{Lo}) + \mathrm{IAQI}_{Lo} \quad (1)$$
where IAQI_P is the individual air quality index for pollutant P (PM2.5, PM10, O3, SO2, NO2 and CO), and C_P is the daily mean concentration of pollutant P. BP_Hi and BP_Lo are the nearby high and low breakpoint values bracketing C_P, as shown in Table 2. IAQI_Hi and IAQI_Lo are the individual air quality indexes corresponding to BP_Hi and BP_Lo, as shown in Table 2. The largest IAQI value is 500, and once the air pollutant's concentration exceeds the highest limit in Table 2, the IAQI will be set to 500.
After the calculation of each IAQI_P, the AQI is then calculated by choosing the maximum IAQI as follows:
$$\mathrm{AQI} = \max\{\mathrm{IAQI}_1, \mathrm{IAQI}_2, \ldots, \mathrm{IAQI}_n\} \quad (2)$$
Equation (2) shows that the AQI is not the summed contribution of all of the air pollutants but rather the maximum value of the IAQI. The air pollutant with the maximum IAQI when the AQI is larger than 50 is designated as the Primary Pollutant. A daily AQI less than 100 is considered qualified according to NAAQS-2012.
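As a concrete illustration of Equations (1) and (2), the short sketch below computes the IAQI of each pollutant by linear interpolation between the Table 2 breakpoints and takes the maximum as the AQI; it is a simplified reading of the standard, and the example daily means are invented values rather than station data.

```python
# Sketch of the IAQI/AQI computation of Equations (1) and (2), using the Table 2 breakpoints.
# Daily means are assumed to be precomputed from days with at least 20 valid hourly values.
BREAKPOINTS = {  # pollutant: list of (concentration limit, IAQI) pairs in increasing order
    "PM2.5": [(0, 0), (35, 50), (75, 100), (115, 150), (150, 200), (250, 300), (350, 400), (500, 500)],
    "PM10":  [(0, 0), (50, 50), (150, 100), (250, 150), (350, 200), (420, 300), (500, 400), (600, 500)],
    "SO2":   [(0, 0), (50, 50), (150, 100), (475, 150), (800, 200), (1600, 300), (2100, 400), (2620, 500)],
    "NO2":   [(0, 0), (40, 50), (80, 100), (180, 150), (280, 200), (565, 300), (750, 400), (940, 500)],
    "O3":    [(0, 0), (160, 50), (200, 100), (300, 150), (400, 200), (800, 300), (1000, 400), (1200, 500)],
    "CO":    [(0, 0), (2, 50), (4, 100), (14, 150), (24, 200), (36, 300), (48, 400), (60, 500)],
}

def iaqi(pollutant, c):
    """Equation (1): interpolate the IAQI between the breakpoints bracketing concentration c."""
    bps = BREAKPOINTS[pollutant]
    if c >= bps[-1][0]:
        return 500  # above the highest limit the IAQI is capped at 500
    for (bp_lo, iaqi_lo), (bp_hi, iaqi_hi) in zip(bps, bps[1:]):
        if c <= bp_hi:
            return (iaqi_hi - iaqi_lo) / (bp_hi - bp_lo) * (c - bp_lo) + iaqi_lo

def aqi(daily_means):
    """Equation (2): the AQI is the maximum IAQI; that pollutant is the primary pollutant if AQI > 50."""
    scores = {p: iaqi(p, c) for p, c in daily_means.items()}
    primary = max(scores, key=scores.get)
    return round(scores[primary]), primary if scores[primary] > 50 else None

# Invented example of a PM2.5-dominated day
print(aqi({"PM2.5": 120, "PM10": 180, "SO2": 20, "NO2": 60, "O3": 90, "CO": 1.2}))
```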
Air Quality Attainment Rate under the New National Ambient Air Quality Standards
The annual average AQI for all 35 stations is shown in Figure 2. A distinct spatial pattern of AQI could be observed from Figure 2: south and southeast stations (both suburban and background) as well as the three traffic monitoring stations in the south were those having the worst air quality (annual AQI > 120). Urban stations and the two north traffic monitoring stations followed, with annual AQI in the range of 100 to 120, which still exceeds the attainment level of 100. Suburban stations and background stations in the north of Beijing had the best air quality, with AQI less than 100. The four remote north stations even had annual AQIs less than 90. The attainment rates (rates of AQI < 100 or API < 100) for the 35 monitoring stations were calculated and grouped into the six categories, as shown in Figure 3. It could be determined that the air quality attainment rates all fell by 15%-22% under NAAQS-2012, with south background stations having the largest attainment rate decrease. Under both standards, north background stations were those having the highest air quality attainment rates (88% for NAAQS-1996 and 73% for NAAQS-2012) while south background stations had the lowest air quality attainment rates (69% for NAAQS-1996 and 47% for NAAQS-2012). These results suggested that the shift from NAAQS-1996 to NAAQS-2012 greatly decreased the air quality attainment rates for all stations through stricter standards, but the spatial pattern remained similar: north background stations had the best air quality while south background stations had the worst.
Annual Variations of Air Quality and Atmospheric Pollutants
The annual mean concentrations of the five pollutants and the mean daily maximum O3 concentrations for the 35 monitoring stations are shown in Figure 4. The spatial patterns of PM2.5 and PM10 are similar to that of AQI: south and southeast stations were the most polluted while the north stations had lower PM2.5 and PM10 concentrations. For O3, the situation is different and no obvious spatial pattern could be obtained, except that the O3 concentrations for all five traffic monitoring stations were significantly lower than those of their nearby stations. For SO2, the spatial pattern is similar to that of PM10, as shown in Figure 4d. Although the southeast stations had higher concentrations of SO2, they were all far less than 50 µg/m3, with IAQI less than 50, suggesting that SO2 has not been a major pollutant. NO2, a kind of vehicle exhaust, has a different spatial pattern: traffic monitoring stations and those stations near the main ring roads had higher concentrations. For the remaining stations, north stations had obviously lower NO2 concentrations than south stations. The pattern of CO is similar to that of PM10: stations in the urban, south and southeast areas had higher concentrations of CO than the north stations. However, the annual average CO of all 35 stations was less than 2 mg/m3, with an IAQI of CO less than 50 according to Table 2. The spatial distribution of the six pollutants suggested that annual O3, SO2 and CO concentrations were all qualified according to NAAQS-2012. PM2.5, PM10 and NO2 were the major pollutants, as some monitoring stations' annual concentrations of these three pollutants exceeded the attainment level (IAQI > 100).
Seasonal Variations of AQI
The air quality of Beijing shows an obvious seasonal pattern, as in Figure 5. Summer had the best air quality, with nearly all monitoring stations' average AQI less than 100. Spring followed, with most stations' average AQI in the range of 90-120. Air quality in autumn deteriorated, as most urban and south stations' average AQI rose to more than 120, with four even greater than 150. Winter's air quality was even worse, with southeast stations' average AQI rising to more than 180.
Seasonal Variations of PM2.5
In spring and summer, the seasonal average PM2.5 concentrations of all monitoring stations were less than 80 µg/m3; three suburban stations in the north, one suburban station in the west and two background stations in the north even had seasonal average PM2.5 concentrations less than 50 µg/m3, as shown in Figure 6a,b. PM2.5 concentrations increased sharply in autumn: most stations' PM2.5 concentrations increased to more than 80 µg/m3 except six stations in the north, as shown in Figure 6c. Stations in the south and southeast even had seasonal average PM2.5 concentrations in the range of 110-140 µg/m3. In the winter, PM2.5 concentrations in the north background stations and suburban stations decreased while those in the southeast and south background stations increased, with three stations' PM2.5 concentrations greater than 140 µg/m3.
Seasonal Variations of PM10
Most stations' PM10 concentrations in summer were less than 100 µg/m 3 except two traffic monitoring stations, as shown in Figure 7b. South stations, urban stations and traffic monitoring stations tended to have higher PM10 concentrations than north stations. Spring had higher average PM10 concentrations than summer with most stations' average PM10 concentrations greater than 100 µg/m 3 . Autumn and winter, shown in Figure 7c,d had higher PM10 concentrations than those of summer and spring, with most urban and south stations' average PM10 concentrations higher than 140 µg/m 3 . By comparing winter and autumn, autumn was more seriously polluted by PM10.
Seasonal Variations of O3
Seasonal variations of O3 were different from those of PM2.5 and PM10. Summer was the most polluted by O3, with most stations' averaged maximum daily O3 concentrations higher than 150 µg/m3. Only four traffic monitoring stations' averaged maximum daily O3 concentrations were lower than 120 µg/m3, as shown in Figure 8b. Spring's averaged maximum daily O3 concentrations fell to less than 120 µg/m3 for most stations. Traffic monitoring stations and two stations in the west had the lowest O3 concentrations, as shown in Figure 8a. The O3 concentrations dropped significantly to less than 90 µg/m3 in autumn and even 60 µg/m3 in winter.
Seasonal Variations of SO2
Figure 9 shows the seasonal variations of SO2 concentrations for the 35 monitoring stations in Beijing. Concentrations of SO2 in spring, summer and autumn were all very low, less than 20 µg/m3. Spring and autumn had comparable SO2 concentrations, while summer had the lowest SO2 concentrations, with most stations' SO2 concentrations lower than 10 µg/m3 except the five traffic monitoring stations, whose average SO2 concentrations were higher than 40 µg/m3. Winter was the most SO2-polluted season, with most stations' SO2 concentrations higher than 20 µg/m3, which is consistent with the heating period in Beijing during the winter with large amounts of coal combustion. The south and southeast stations were affected most by SO2 pollution, with average SO2 concentrations higher than 30 or even 40 µg/m3.
Seasonal Variations of NO2
Autumn and winter had higher NO2 concentrations than spring and summer, as shown in Figure 10. In autumn and winter, NO2 concentrations were higher at the south stations and traffic monitoring stations than at the rest of the stations in the north. In spring and summer, most stations' NO2 concentrations were less than 40 µg/m3, except the traffic monitoring stations and the stations near the main ring roads. It is obvious that stations in the south were more polluted by NO2 than those in the north, as the southern parts of Beijing are more affected by traffic.
Seasonal Variations of CO
CO concentrations in spring and summer were very low, less than 1.0 mg/m3 or even 0.8 mg/m3, as shown in Figure 11. CO concentrations began to rise in autumn to more than 1.2 mg/m3 for most monitoring stations in the south area, as shown in Figure 11c. Winter was the most CO-polluted season, with most monitoring stations having average CO concentrations higher than 1.6 mg/m3 except seven stations in the north. The maximum average CO concentration even reached 4.1 mg/m3, suggesting that the heating period in winter brought a great deal of CO pollution to Beijing.
Discussion
The real-time release of atmospheric pollution concentrations and air quality for 35 permanent monitoring stations on the web platform provided a powerful data source for governments and researchers to investigate air pollution situations and make policies to deal with air pollution. The new ambient air quality standard has brought PM2.5, O3 and CO into the air quality monitoring system. First of all, the introduction of PM2.5 in the standard brought down the air quality attainment rate by 15% to 22% for all monitoring stations, which poses a real problem for the evaluation of the work of the local government, as the air quality attainment rates have dropped to less than 70% for most monitoring stations.
Different from the daily report of the API without the concentration of each pollutant under NAAQS-1996, the Beijing municipal government provided real-time concentrations of all six pollutants to the public. These public data, although sometimes incomplete due to internet transfer or data errors, can help researchers to investigate atmospheric pollution situations in depth. As the most frequent primary pollutants, PM2.5 and PM10 showed similar spatial and seasonal patterns: south and southeast stations were polluted most during autumn and winter, while pollution in summer and spring was reduced to a fairly good level. Except for O3, the other five pollutants showed similar temporal patterns in which summer had the lowest pollutant concentrations while autumn and winter were the most polluted. O3 concentrations tend to increase in the humid summer, when strong solar radiation drives strong photochemical reactions in the atmosphere. In the dry winter and autumn, O3 concentrations dropped greatly. Additionally, O3 concentrations are higher at the north stations without traffic activities; traffic is the major contributor to NO2 in the atmosphere. The high concentrations of nitrogen oxides in the regions with heavy traffic have greatly reduced the concentrations of O3 through the chemical reaction process, while for those stations far away from traffic (mainly the north stations), the O3 concentrations are higher because of lower concentrations of nitrogen monoxide, which consumes O3. The spatial and temporal distributions of SO2, NO2 and CO concentrations were similar, as the south and southeast stations in winter were the most seriously polluted while spring and summer were less polluted. As a kind of vehicle exhaust, NO2 concentrations at the traffic monitoring stations just next to main streets were higher than at other stations, demonstrating the impact of traffic. SO2 and CO, usually considered to be products of coal combustion, were higher in the winter heating season and probably also transported from the southern neighboring provinces. Table 2 shows the currently operating air quality standard with its limits. By comparing the limits with the qualified level (IAQI < 100), we find that under most situations, the concentrations of CO and SO2 fall in the range of IAQI < 100 or even 50, suggesting that CO and SO2, although included in the air quality monitoring system, were not the major reason for the air pollution. NO2 and O3, however, would sometimes exceed the daily limits (concentrations at which IAQI = 100). Particulate matter, especially PM2.5, was the frequent pollutant over the limit for south and southeast stations during winter and autumn. These results demonstrated that under the new ambient air quality standard, PM2.5 is the major reason for the deterioration of air quality in Beijing. The high concentrations of PM2.5 at the south and southeast stations also suggested that, besides local sources, transport from the southern neighboring provinces cannot be neglected.
Wind fields play an important role in the dispersion of atmospheric pollutants, especially under conditions of heavy pollution. In this paper, we selected the top 15% of concentrations of all pollutants for one selected station (the situations of the other stations are similar). The top 15% concentrations of the six pollutants and their associations with wind speed and wind direction at Station 1 (Dongsi, an urban station) are shown in Figure 12. The results suggest that the top 15% concentrations of all pollutants (heavy pollution) mainly occurred under low wind speed (less than 1.5 m/s), except for O3. For PM2.5, south and east low-wind-speed situations have the highest possibility of high PM2.5 concentrations. For PM10, the situation is similar, except that some strong northwest winds may bring high PM10 concentrations, indicating a dust source of PM10. For O3, SO2, NO2 and CO, high pollutant concentrations mainly occurred under southwest and east low-speed wind conditions. These results suggested that low-wind-speed situations with southwest or east winds have the most frequent heavily polluted occurrences in Beijing.
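A compact way to reproduce this kind of screening is sketched below: it selects the hours in the top 15% of a pollutant's concentration and cross-tabulates them against wind-speed and wind-direction bins. The file name, column names, and bin edges are assumptions, not the study's actual data layout.

```python
# Sketch of the heavy-pollution wind analysis: top 15% of hourly PM2.5 readings at one
# station tabulated against wind conditions. Column names and bins are assumed.
import pandas as pd

df = pd.read_csv("dongsi_hourly.csv")  # hypothetical columns: datetime, pm25, wind_speed, wind_dir
heavy = df[df["pm25"] >= df["pm25"].quantile(0.85)]  # top 15% of concentrations

heavy = heavy.assign(
    speed_bin=pd.cut(heavy["wind_speed"], bins=[0, 1.5, 3, 6, 99],
                     labels=["<1.5 m/s", "1.5-3 m/s", "3-6 m/s", ">6 m/s"]),
    dir_bin=pd.cut(heavy["wind_dir"] % 360, bins=range(0, 405, 45), right=False,
                   labels=["N", "NE", "E", "SE", "S", "SW", "W", "NW"]),
)
# Share of heavy-pollution hours falling into each wind-speed/direction combination
print(pd.crosstab(heavy["dir_bin"], heavy["speed_bin"], normalize="all").round(3))
```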
Conclusions
Beijing, the capital of China, has endured serious air pollution problems in the past decade. However, the previous air quality standard (NAAQS-1996) suggested that the air quality was fairly good, which is inconsistent with residents' feelings and the frequent fog-haze events after 2011. Therefore, China released new air quality standards (NAAQS-2012) by adding PM2.5, O3 and CO, besides PM10, SO2 and NO2, into the air quality standards, with real-time reporting requirements. In this study, we collected one year of monitoring data of all six pollutants from September 2014 to August 2015 to provide a comprehensive evaluation of the seasonal and spatial situations of atmospheric pollution in Beijing.
The temporal distributions of the air pollutants demonstrated that autumn and winter were the seasons most polluted by PM2.5, PM10, SO2, NO2 and CO, while summer was the one having the best air quality. O3 showed an obviously different pattern in which summer was the most O3-polluted season, as abundant solar radiation and a humid atmosphere favor the generation of O3 in the afternoon. The spatial distribution of these air pollutants suggested that the stations in the south and southeast areas were more seriously polluted by PM2.5, PM10, SO2 and CO in winter and autumn, while the atmospheric pollution concentrations at the north stations were far lower. This spatial pattern of air pollution demonstrated the transportation from the south and southeast directions. An additional wind profile analysis of heavy pollution situations indicates that low-speed southwest or east wind situations have the higher possibility of heavy pollution, suggesting that it is highly possible that long-range transportation of air pollutants from southern or eastern sources cannot be neglected. NO2 concentrations at the traffic monitoring stations were higher. The monitoring results revealed that PM2.5 and PM10 were the major contributors to the air pollution and that NO2 and O3 remained relatively important pollutants but not comparable to particulate matter. SO2, a pollutant whose concentration used to be high at the beginning of the 21st century, has become a pollutant with low concentrations, well under the air quality standard qualification. CO concentrations, with winter higher than the rest of the seasons, were qualified under most circumstances. Therefore, the major challenge for the government of Beijing in dealing with the serious problems of air pollution is how to reduce the concentrations of PM2.5, both locally produced and transported a long way from southerly directions.
|
2016-03-01T03:19:46.873Z
|
2015-11-18T00:00:00.000
|
{
"year": 2015,
"sha1": "6749b2d3e1ffe3fde6d71af13c46c1cee866eae6",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2073-4433/6/11/1753/pdf",
"oa_status": "GOLD",
"pdf_src": "Crawler",
"pdf_hash": "6749b2d3e1ffe3fde6d71af13c46c1cee866eae6",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Environmental Science"
]
}
|
219308095
|
pes2o/s2orc
|
v3-fos-license
|
Terminology Finite-State Preprocessing for Computational LFG
This paper presents a technique to deal with multiword nominal terminology in a computational Lexical Functional Grammar. This method treats multiword terms as single tokens by modifying the preprocessing stage of the grammar (tokenization and morphological analysis), which consists of a cascade of two-level finite-state automata (transducers). We present here how we build the transducers to take terminology into account. We tested the method by parsing a small corpus with and without this treatment of multiword terms. The number of parses and parsing time decrease without affecting the relevance of the results. Moreover, the method improves the perspicuity of the analyses.
Introduction
The general issue we are dealing with here is to determine whether there is an advantage to treating multiword expressions as single tokens, by recognizing them before parsing. Possible advantages are the reduction of ambiguity in the parse results, perspicuity in the structure of analyses, and reduction in parsing time. The possible disadvantage is the loss of valid analyses. There is probably no single answer to this issue, as there are many different kinds of multiword expressions. This work follows the integration 1 of (French) fixed multiword expressions, like a priori, and time expressions, like le 12 janvier 1988, in the preprocessing stage.
Terminology is an interesting kind of multiword expression because such expressions are almost but not completely fixed, and there is an intuition that you won't lose many good analyses by treating them as single tokens. Moreover, terminology can be semi- or fully automatically extracted. Our goal in the present paper is to compare the efficiency and syntactic coverage of a French LFG grammar on a technical text, with and without terminology recognition in the preprocessing stage. The preprocessing consists mainly of two stages: tokenization and morphological analysis. Both stages are performed by use of finite-state lexical transducers (Karttunen, 1994). In the following, we describe the insertion of terminology in these finite-state transducers, as well as the consequences of such an insertion on the syntactic analysis, in terms of the number of valid analyses produced, parsing time and nature of the results. We are part of a project which aims at developing LFG grammars (Bresnan and Kaplan, 1982) in parallel for French, English and German (Butt et al., to appear). The grammar is developed in a computational environment called XLE (Xerox Linguistic Environment) (Maxwell and Kaplan, 1996), which provides automatic parsing and generation, as well as an interface to the preprocessing tools we are describing.
Terminology Extraction
The first stage of this work was to extract terminology from our corpus. This corpus is a small French technical text of 742 sentences (7000 words). As we have at our disposal parallel aligned English/French texts, we use the English translation to decide when a potential term is actually a term. The terminology we are dealing with is mainly nominal. To perform this extraction task, we use a tagger (Chanod and Tapanainen, 1995) to disambiguate the French text, and then extract the following syntactic patterns, N Prep N, N N, N A, A N, which are good candidates to be terms. These candidates are considered as terms when the corresponding English translation is a unit, or when their translation differs from a word-for-word translation. For example, we extract the following terms:
(1) vitesses rampantes (creepers)
boîte de vitesse (gearbox)
arbre de transmission (drive shaft)
tableau de bord (instrument panel)
This simple method allowed us to extract a set of 210 terms which are then integrated in the preprocessing stages of the parser, as we are going to explain in the following sections.
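The pattern-based candidate extraction can be pictured with the short sketch below; the part-of-speech tag names and the tagged-token format are assumptions for illustration, not the actual output format of the tagger used here.

```python
# Rough sketch of candidate-term extraction over a POS-tagged French sentence.
# Tag names (N, PREP, A, ...) and the (word, tag) representation are assumed.
PATTERNS = [("N", "PREP", "N"), ("N", "N"), ("N", "A"), ("A", "N")]

def extract_candidates(tagged_sentence):
    """tagged_sentence: list of (word, tag) pairs; returns the matching word sequences."""
    candidates = []
    tags = [tag for _, tag in tagged_sentence]
    for pattern in PATTERNS:
        for i in range(len(tags) - len(pattern) + 1):
            if tuple(tags[i:i + len(pattern)]) == pattern:
                candidates.append(" ".join(word for word, _ in tagged_sentence[i:i + len(pattern)]))
    return candidates

sentence = [("boîte", "N"), ("de", "PREP"), ("vitesse", "N"), ("est", "V"),
            ("en", "PREP"), ("deux", "NUM"), ("sections", "N")]
print(extract_candidates(sentence))  # ['boîte de vitesse']
```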
We are aware that this semi-automatic process works because of the small size of our corpus. A fully automatic method (Jacquemin, 1997) could be used to extract terminology. But the material extracted was sufficient to perform the experiment of comparison we had in mind.
Grammar Preprocessing
In this section, we present how tokenization and morphological analysis are handled in the system and then how we integrate terminology processing in these two stages.
Tokenization
The tokenization process consists of splitting an input string into tokens (Grefenstette and Tapanainen, 1994; Aït-Mokhtar, 1997), i.e. determining the word boundaries. If there is one and only one output string, the tokenization is said to be deterministic; if there is more than one output string, the tokenization is non-deterministic. The tokenizer of our application is non-deterministic (Chanod and Tapanainen, 1996), which is valuable for the treatment of some ambiguous input strings 2, but in this paper we deal with fixed multiword expressions. The tokenization is performed by applying a two-level finite-state transducer on the input string. For example, applying this transducer on the sentence in (2) gives the following result, the token boundary being the @ sign.
(2) (The tractor is stationary.)
Le@tracteur@est@à@l'@arrêt@.@
2 For example bien que in French.
In this particular case, each word is a token. But several words can be a unit, for example compounds or multiword expressions. Here are some examples of the desired tokenization, where terms are treated as units:
(3) La boîte de vitesse est en deux sections.
(This lever engages the drive shaft.)
Ce@levier@engage@l'@arbre de transmission@.@
We need such an analysis for the terminology extracted from the text. This tokenization is realized in two logical steps. The first step is performed by the basic transducer and splits the sentence into a sequence of single words. Then a second transducer containing a list of multiword expressions is applied. It recognizes these expressions and marks them as units. When more than one expression in the list matches the input, the longest matching expression is marked.
We have included all the terms and their morphological variations in this last transducer, so that they are analyzed as single tokens later on in the process. The problem now is to associate a morphological analysis to these units.
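The behaviour of this second, longest-match layer can be mimicked outside the finite-state machinery with the small sketch below; the term list is a toy subset, and the plain-Python greedy matcher only imitates what the compiled transducer does rather than reproducing the actual implementation.

```python
# Toy imitation of the second tokenization layer: greedy longest-match grouping of known
# multiword terms into single tokens, marked with '@' boundaries as in examples (2)-(4).
TERMS = {("boîte", "de", "vitesse"), ("arbre", "de", "transmission"), ("tableau", "de", "bord")}
MAX_LEN = max(len(t) for t in TERMS)

def retokenize(words):
    tokens, i = [], 0
    while i < len(words):
        for n in range(min(MAX_LEN, len(words) - i), 1, -1):  # try the longest span first
            if tuple(w.lower() for w in words[i:i + n]) in TERMS:
                tokens.append(" ".join(words[i:i + n]))  # a multiword term becomes one token
                i += n
                break
        else:  # no term starts here: keep the single word
            tokens.append(words[i])
            i += 1
    return "@".join(tokens) + "@"

print(retokenize(["Ce", "levier", "engage", "l'", "arbre", "de", "transmission", "."]))
# Ce@levier@engage@l'@arbre de transmission@.@
```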
Morphological Analysis
The morphological analyzer used during the parsing process, just after the tokenization process, is a two-level finite-state transducer (Chanod, 1994). This lexical transducer links the surface form of a string to its morphological analysis, i.e. its canonical form and some characterizing morphological tags. Some examples are given in (5).
(5) >veut
vouloir+IndP+SG+P3+Verb
>animaux
animal+Masc+PL+Noun
animal+Masc+PL+Adj
The compound terms have to be integrated into this transducer. This is done by developing a local regular grammar which describes the compound morphological variation, according to the inflectional model proposed in (Karttunen et al., 1992) and (Quint, 1997), and can be easily added to the regular grammar if needed.
A cascade of regular rules is applied on the different parts of the compound to build the morphological analyzer of the whole compound. For example, roue motrice is marked with the diacritic +DPL, for double plural, and then a first rule which just copies the morphological tags from the end to the middle is applied if the diacritic is present in the right context. The composition of these two layers gives us the direct mapping between surface inflected forms and morphological analyses. The same kind of rules are used when only the first part of the compound varies, but in this case the second rule just deletes the tags of the second word. The two morphological analyzers for the two variations are both unioned into the basic morphological analyzer for French we use for morphology. The result is the transducer we use following tokenization, completing input preprocessing. An example of compound analysis is given here:
(6) > roues motrices
roue motrice+Fem+PL+Noun
> régimes moteur
régime moteur+Masc+PL+Noun
The morphological analysis developed here for terminology allows multiword terms to be treated as regular nouns within the parsing process. Constraints on agreement remain valid, for example for relative or adjectival attachment.
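The two inflection patterns in example (6) can be illustrated, for regular -s plurals only, with the toy function below; the real system expresses this with finite-state rules over the +DPL diacritic, so this is only a surface-level imitation and the term list is an assumption.

```python
# Toy imitation of the two compound inflection patterns of example (6): "double plural"
# (+DPL) terms pluralize both words, the others pluralize only the first word.
# Handles only regular -s plurals; the actual analyzer is a finite-state transducer.
DOUBLE_PLURAL = {"roue motrice"}  # assumed list of +DPL terms

def pluralize(term):
    words = term.split()
    if term in DOUBLE_PLURAL:
        return " ".join(w + "s" for w in words)        # roue motrice -> roues motrices
    return " ".join([words[0] + "s"] + words[1:])      # régime moteur -> régimes moteur

for term in ["roue motrice", "régime moteur"]:
    print(f"{pluralize(term)}  ->  {term}+PL+Noun")
```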
Parsing with the Grammar
One of the problems one encounters with parsing using a high level grammar is the multiplicity of (valid) analyses one gets as a result. While syntactically correct, some of these analyses should be removed for semantic reasons or in a particular context. One of the challenges is to reduce the parse number, without affecting the relevance of the results and without removing the desired parses. There are several ways to perform such a task, as described for example in (Segond and Copperman, 1997); we show here that finite state preprocessing for compounds is compatible with other possibilities.
Experiment and Results
The experiment reported here is very simple: it consists of parsing the technical corpus before and after integration of the multiword terms in the preprocessing components, using exactly the same grammar rules, and comparing the results obtained. As the compounds are mainly nominal, they are analyzed just as regular nouns by the grammar rules. For example, if we parse the NP: (7) La boîte de vitesse (the gearbox) before integration, we get the structures shown in Fig. 3, and after integration we get the simple structures shown in Fig. 4. The following tables show the results obtained on the whole corpus: the results are straightforward: one observes a significant reduction in the number of parses as well as in the parsing time, and no change at all for sentences which do not contain technical terms. Looking closer at the results shows that the parses ruled out by this method are semantically undesirable. We discuss these results in the next section.
Analysis of Results
The good results we obtained in terms of parse number and parsing time reduction were predictable. As the nominal terminology groups nouns, prepositional phrases and adjectival phrases together in lexical units, there is a significant reduction of the number of attachments. For example, an adjective such as hydraulique can syntactically attach to voyant, levier, and distributeur, which leads to 3 analyses. But in the domain the corpus is concerned with, distributeur hydraulique is a term. Parsing it as a nominal unit gives only one parse, which is the desired one. Moreover, grouping terms in units resolves some lexical ambiguity in the preprocessing stage: for example, in ceinture de sécurité, the word ceinture is a noun but may be a verb in other contexts. Parsing ceinture de sécurité as a nominal term avoids further syntactic disambiguation. Of course, one has to be very careful with the terminology integration in order to prevent a loss of valid analyses. In this experiment, no valid analyses were ruled out, because the semiautomatic method we used for extraction and integration allowed us to choose accurate terms. The reduction in the number of attachments is the main source of the decrease in the number of parses.
As the number of attachments and of lexical ambiguities decreases, the number of grammar rules applied to compute the results decreases as well. The parsing time is reduced as a consequence. The gain in efficiency is interesting in this approach, but perhaps more valuable is the perspicuity of the results. For example, in a translation application it is clear that the representation given in Fig. 4 is more relevant and directly exploitable than the one given in Fig. 3, because in this case there is a direct mapping between the semantic predicates in French and English.
Conclusion and possible extensions
The experiment presented in this paper shows the advantage of treating terms as single tokens in the preprocessing stage of a parser. It is an example of interaction between low level finite-state tools and higher level grammars. It shows the benefit of such a cooperation for the treatment of terminology and its implications for the syntactic parse results. One can imagine other interactions, for example, the use of a "guesser" a, i.e. a transducer which can easily process unknown words and give them plausible morphological analyses according to rules about productive endings. There are ambiguity sources other than terminology, but this method of ambiguity reduction is compatible with others, and improves the perspicuity of the results. It has been shown to be valuable for other syntactic phenomena like time expressions, where local regular rules can compute the morphological variation of such expressions. In general, lexicalization of (fixed) multiword expressions, like complex prepositions or adverbial phrases, compounds, dates, numerals, etc., is valuable for parsing because it avoids the creation of "ad hoc" and unproductive syntactic rules like ADV → N Coord N to parse corps et âme (body and soul), and unusual lexicon entries like fur to get au fur et à mesure (as one goes along). Ambiguity reduction and better relevance of results are direct consequences of such a treatment. This experiment, which has been conducted on a small corpus containing few terms, will be extended with an automatic extraction and integration process on larger scale corpora and other languages.
a Already used in tagging applications.
|
2014-10-01T00:00:00.000Z
|
2002-01-01T00:00:00.000
|
{
"year": 2002,
"sha1": "a283c9509c021ac3b0ae5f58bf2f1313b21edbe6",
"oa_license": null,
"oa_url": "http://dl.acm.org/ft_gateway.cfm?id=980877&type=pdf",
"oa_status": "BRONZE",
"pdf_src": "ACL",
"pdf_hash": "a283c9509c021ac3b0ae5f58bf2f1313b21edbe6",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": []
}
|
258784180
|
pes2o/s2orc
|
v3-fos-license
|
IAG/USP TEST SITE: A NEAR SURFACE GEOPHYSICS TEACHING AND RESEARCH LABORATORY
This work shows the construction project of the Geophysical Test Site (Sítio Controlado de Geofísica Rasa, SCGR-I) of the Institute of Astronomy, Geophysics and Atmospheric Sciences (IAG) of the University of São Paulo (USP) and its impact on teaching and research in Geophysics. The IAG/USP test site (SCGR-I) has 1500 m2, being characterized by 7 study lines with 30 m length in the NS direction. Targets, such as metallic pipes and tanks, plastic pipes and tanks, concrete tubes, ceramic pots, among others, with different geometries and physical properties were buried at depths from 0.5 to 2 m in relation to the surface. A metallic guide pipe of 3.8 cm in diameter was buried at the 15 m position along the EW direction, crossing all 7 lines. The targets simulate objects found in archaeological studies, geotechnical and urban planning studies and environmental studies. In this work, comparative analyses between real and synthetic GPR results on metallic and plastic tanks are shown, as well as EM38 results on metallic tanks. The SCGR-I proved to be an important tool for teaching and research related to the applications of geophysical methods for near surface investigations and could be a motivation to build more test sites.
INTRODUCTION
Since 1993, the São Paulo campus of the University of São Paulo (USP) has been used as an applied geophysics laboratory, but only in 1997 did the area in front of the Institute of Astronomy, Geophysics and Atmospheric Sciences (IAG/USP) begin to be systematically used as a laboratory for practical activities by undergraduate and graduate students in geophysics.
In these systematic surveys, several geophysical methods were used, such as GPR (Ground Penetrating Radar). The importance of this test site is that the geophysical signatures of targets whose physical and geometric properties are known can be used as standard responses for each type of material and can be extrapolated to areas where subsurface information is not available.
SCGR-I constitutes an important tool for teaching and research in geophysics, and will be of great importance to our community, consisting of a new underground laboratory. With the installation of the SCGR-I, an important step was taken to improve the knowledge regarding the geophysical responses of targets found in environmental, engineering and archaeology studies.
To illustrate this, numerical modeling results are presented using the FDTD (Finite Differences in Time Domain) method, which simulates the responses from the GPR reflections on the metallic tanks installed on Line 4 and the plastic tanks installed on Line 5 of the SCGR-I, as well as the GPR results on the same targets.
Additionally, the results obtained with the inductive electromagnetic method using the EM38 equipment on the metallic tanks are also presented.
The present work summarizes the construction project of the IAG/USP Test Site (SCGR-I), shows the comparative results of numerical modeling GPR 2D and real data, EM38 results over metallic tanks and ends with the impacts on teaching and research activities in near surface geophysics. Geological information for the São Paulo basin in the SCGR-I area was obtained through the lithology of three wells for geological and geophysical research that were drilled in the study area (Porsani et al., 2004). (Borges, 2007).
SCGR-I of the IAG/USP: Constructive Project
The constructive project of the SCGR-I is presented, aiming to serve as an inspiration for the construction of similar test sites. Line 2 (Figure 4) is constituted by brown High Density Polyethylene (PAD - Polietileno de Alta Densidade) pipes with a diameter of 11 cm and 2 m in length. PAD pipes simulate the transport of drinking water to homes, and they are often found in large cities. These pipes are used by the Basic Sanitation Company of the State of São Paulo (SABESP). Figure 5 shows the targets buried in Line 3, which is characterized by concrete tubes of 26, 48 and 70 cm in diameter. The tubes simulate rainwater channeling galleries and sewage drainage.
Line 4 (Figure 6) is characterized by 200-liter metallic tanks that were arranged both horizontally and vertically, individually and in pairs. All tanks were buried empty to avoid corrosion problems. This line aims to simulate environmental studies, whose goal is the location of the tanks and the determination of their depths. For the numerical modeling, the wave field was simulated using an "exploding reflector" source, in which waves are generated simultaneously from the target and sent to the surface (Yilmaz, 1987; Daniels, 1996). This procedure corresponds to the repositioning of the diffraction hyperbolas onto the targets, collapsing the energy to the apex of the hyperbola, a common procedure in the GPR and seismic data migration step. It is observed that the top of the metallic tanks is characterized by strong hyperbolic reflections, which was expected, as shown in the numerical modeling result (Figure 10a). Note a hyperbolic reflection at the 19.5 m position and at a depth of 1.5 m, highlighted in Figure 10b by an arrow. This reflection, called an "artifact", corresponds to a constructive interference of the reflection of the GPR signal between the tank at the 19 m position and the tank at the 20 m position. A detailed discussion of the identification and removal of this artifact through effective processing of the GPR data can be found in Porsani and Sauck (2007). Also note three other hyperbolic reflections ("artifacts") under positions 24, 25 and 25.5 m, related to voids in the subsoil due to poor soil compaction. These three anomalies were confirmed by means of auger boreholes.
GPR Profiles
The guide metal pipe arranged at the 15 m position and at 0.5 m depth is characterized by a tighter hyperbolic reflection. Note that from 2.5 m depth, the GPR signal is attenuated due to the conductive characteristics of the sediments of the São Paulo basin (Porsani et al., 2004). Figure 11 shows the comparison between the results of the 2D GPR numerical modeling and the 200 MHz GPR profile on Line 5 of the SCGR-I, consisting of plastic tanks. Figure 11a shows the results of the GPR numerical modeling for 150 MHz. It is observed that the plastic tanks and the guide metal pipe are characterized by hyperbolic reflections generated at the top of the targets, whose apex indicates their underground positions. It is noted that the tanks filled with water are characterized by reflections generated at the top and bottom. Additionally, it is also observed that the tanks filled with water and brine present reflections with inverted polarity compared to the top of the empty tanks. Figure 11b shows the field GPR profile. For the tanks filled with water arranged at positions 4, 17 and 23 m, two reflections are observed at different times.
The first reflector characterizes the top and the second reflector is related to the base of the tank. Note also that the reflectors at the top of the tanks present an inversion of polarity in relation to the reflections generated at the top of the empty tanks due to the high impedance contrast between the clayey soil and the water. A more detailed discussion on identifying the polarity change of the GPR signal can be found in Rodrigues and Porsani (2006).
The half-filled tanks with brine arranged at positions 7 and 26 m were characterized by reflections generated at the soil/plastic/air interface. The upper limit of the brine is not detected, due to the overlap of the reflections at the top of the tank (empty part) and the top of the brine, similar to the empty tanks. The base of these tanks is also not determined due to the high electrical conductivity of brine, causing a high attenuation of the GPR signal.
Tanks filled with brine arranged at positions 10, 13 and 29 m are characterized by reflections with reversed signal polarity generated at the top of the targets, similar to tanks filled with water. Note that the base of the tanks is not detected, due to the attenuation of the electromagnetic wave in brine which is very conductive. The guide metal pipe arranged at a 15 m position and at a depth of 0.5 m, served as a reference target for all seven lines of studies installed at SCGR-I. The top of the metallic pipe is characterized by a strong reflection due to the high electrical conductivity of the metal, causing a total reflection of the GPR signal.
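The hyperbolic reflections described in this section follow directly from the two-way travel time of a point diffractor; the sketch below evaluates that relation for the guide pipe geometry, using an assumed soil permittivity rather than a value measured at the site.

```python
# Two-way travel-time hyperbola of a buried point target, the geometry behind the
# hyperbolic reflections discussed above: t(x) = 2*sqrt(d^2 + (x - x0)^2) / v,
# with v = c / sqrt(eps_r). The relative permittivity below is an assumed value.
import numpy as np

C = 0.2998               # speed of light in m/ns
EPS_R = 12.0             # assumed relative permittivity of the clayey soil
v = C / np.sqrt(EPS_R)   # ~0.087 m/ns

def hyperbola(x, x0, depth):
    """Two-way travel time (ns) recorded at antenna position x for a diffractor at (x0, depth)."""
    return 2.0 * np.sqrt(depth**2 + (x - x0)**2) / v

x = np.linspace(10.0, 20.0, 11)                # antenna positions along the line (m)
print(np.round(hyperbola(x, 15.0, 0.5), 1))    # guide pipe at the 15 m position, 0.5 m deep
```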
EM38 Profile
Geophysical methods have been important in detecting underground objects such as communication cables, pipes and other public infrastructure. Electromagnetic systems operating in the frequency domain have been shown to be suitable and efficient in detecting buried metallic objects, with several examples in studies of unexploded ordnance detection, urban interference and archaeology (Nelson et al., 2007;Qu et al., 2017).
GPR has been successfully used to detect buried objects and interference networks underground.
However, it has limitations to detect objects buried in conductive saturated clayey soils. On the other hand, the electromagnetic frequency domain system (EM38) does not suffer this limitation in the case of metallic objects buried in conductive environments. Therefore, the integrated application of GPR and EM38 can be complementary, improving the ability to detect metallic targets arranged in conductive soils.
In this work, Geonics EM38 equipment was used to detect the metallic tanks buried in the SCGR-I. A profile of 30 meters in length was acquired, with measurements spaced 1 meter apart. The equipment allows a maximum theoretical investigation depth of 1.5 meters, being indicated only for mapping shallow targets. The measurements were performed with the coils in the vertical and horizontal positions, so that two theoretical depths were obtained at each station. The data collected at the two investigation depths were entered into an inversion program, which provided a 2D section of the conductivity profile.
For the interpretation of the EM38 profile on the metallic tanks, the inversion program called EM34-2D was used (Monteiro Santos, 2004). This program uses the non-linear inversion algorithm presented in Sasaki (1989). The algorithm uses a smoothness-constrained regularized inversion technique for electromagnetic data acquired along profiles. The algorithm corresponds to a modified 1D inversion with 2D smoothness constraints between adjacent 1D models. Thus, it is possible to obtain the answer in terms of the variation of electrical conductivity and real depths for the measurement points, which, interpolated, allow the creation of a 2D image. The metallic tanks buried horizontally and vertically at depths within the equipment's reach were clearly detected by anomalies of high electrical conductivity values. Tanks that are installed up to 1.5 m deep were detected with good accuracy. This shows that the EM38 can be very efficient in detecting buried metallic objects up to 1 meter deep, and its application integrated with GPR is interesting, especially in conductive soils. Note that the guide metal pipe installed at 0.5 m depth was not clearly detected. This fact is due to its small dimensions, i.e., 3.8 cm in diameter, which is below the detection limit of the EM38 equipment. Among the studies published in the literature are: Porsani et al. (2004, 2006, 2010, 2017, 2018); Rodrigues (2004); Lima (2006); Rodrigues and Porsani (2006); Porsani and Sauck (2007)
A Miniaturized High-Gain Flexible Antenna for UAV Applications
A miniaturized high-gain flexible unmanned aerial vehicle (UAV) antenna is presented in this study. The proposed antenna basically comprises three printed patch sections in series, etched on a dielectric substrate, with a flexible cable loaded on the bottom of the dielectric substrate. A coplanar waveguide (CPW) with an asymmetric ground feeding structure is employed to provide good impedance matching. By adjusting the dimensions of the meander-line patch, the surface current on the straight-line patch and the flexible cable can be brought into the same phase, which increases the radiation gain while maintaining a compact size. As an important merit to be highlighted, the flexible cable greatly reduces the volume and aerodynamic drag of the antenna. It has a low-profile compact size of 196 × 15 × 0.8 mm³ (excluding the flexible cable). The results show that the omnidirectional gain fluctuates within 4.5 ± 0.1 dBi in the desired band (902 MHz–928 MHz), which is high enough for UAV applications. Details of the antenna design and experimental results are presented and discussed.
Introduction
In recent years, the applications of unmanned aerial vehicles (UAVs) have attracted considerable attention in the communication, military, and commercial markets. They are widely used for exploration, reconnaissance, and multimedia communication. A reliable communication link between the ground control unit and the aircraft is necessary for transmitting telemetry. Unfortunately, there are some restrictions on UAV antennas. Firstly, the flight time of a drone is quite limited by the lithium battery life; therefore, the larger the physical volume of the antenna, the larger the space occupied on the UAV, the greater the wind drag, and the shorter the flight time. Secondly, the ratio of flight distance to height is large for UAVs, which means that the radiation pattern of the antenna must be omnidirectional in the horizontal plane [1][2][3][4][5]. In view of these factors, the antennas are required to have a small effective area, light weight, and omnidirectional radiation.
In typical applications, the image and data transmission antennas of UAVs need to cover transmission distances of more than 10 kilometers, with a data transmission power of 1 watt and a reception power of 500 milliwatts. Nowadays, compact systems for high-quality and high-speed transmission of data and images are extremely important in UAV applications [6]. For improving the radiation characteristics, some metamaterial circularly polarized antennas with various geometries are presented in [7][8][9], which have high gain and compact size. However, the low cost, vertical polarization, and omnidirectional radiation coverage of monopoles in the azimuth plane make them very attractive for UAV applications, so it is worth trying to combine metamaterials with monopole antennas. A variety of antennas have been used on UAVs to achieve the required radiation pattern. The authors of [10][11][12][13][14][15] designed antennas that achieve an omnidirectional radiation pattern by using printed monopoles or dipoles. A horizontally polarized omnidirectional loop antenna using a segmented line was proposed in [16]; over the frequency range of 2.35-2.55 GHz, the reflection coefficient is less than −10 dB, but the maximum gain is only up to 2 dBi. As reported in [17], although the segmented loop antenna has a compact size and omnidirectional radiation at 956 MHz, it has low gain. Weiyu et al. [18] designed a dual-band, low-profile antenna with an omnidirectional pattern in the horizontal plane. In [19], a 0.5-2 GHz (VSWR <3 : 1) blade monopole antenna was presented. However, this kind of antenna needs a huge ground plane, leading to a large overall size. Roughly speaking, monopole and dipole antennas have been widely used in wireless communication systems because of their simple structures, low costs, and omnidirectional radiation patterns, which also offer advantages in UAV applications. However, most of them hardly achieve small dimensions at low frequencies, which makes them difficult to integrate into UAVs. To improve the performance of UAVs, miniaturized monopole antennas with high gains are highly desirable.
In this work, a miniaturized high-gain flexible monopole antenna for UAV applications is presented. The radiating section, formed by a straight-line patch, a meander-line patch, and a flexible cable, together with a compact feeding network, is designed carefully to provide high-gain omnidirectional radiation. The novelty of this design is that a flexible cable, which has a small cross-sectional area and is made of low-loss material, replaces part of the radiating patch, thereby reducing aerodynamic drag. The low-loss, flexible, and lightweight design is also an advantage of this antenna. As an important advantage to be highlighted, its maximum gain is up to 4.56 dBi, which is about 2.2 dBi higher than traditional dipole antennas, and the flying distance can be extended to about 1.2 times longer.
Antenna Design and Analysis
Since successive half-wavelength current maxima are in opposite phase, if the currents were equal there would be near-perfect cancellation of the radiation from the oppositely phased pairs of current maxima. However, if all the current maxima were in phase, the radiated fields would add, and a high gain could be achieved. The way of achieving this phase reversal is by inserting an antiresonant network, which uses radiating elements that are a little longer or shorter than one half wavelength. The self-reactance of the longer or shorter dipole is then used in the design of the phasing network between the elements to achieve the desired overall phase shift. In this design, a meander-line patch is used as an antiresonant network, which brings the straight-line patch and the flexible cable into the same phase so that the antenna achieves high-gain radiation. The geometry of the proposed UAV antenna with detailed dimensions is shown in Figure 1.
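To make the half-wavelength phasing argument concrete, a minimal back-of-envelope sketch in Python of the relevant length scales is given below. It is not part of the authors' design flow: it only estimates the free-space half-wavelength at 915 MHz and, under the common quasi-static assumption eps_eff ≈ (eps_r + 1)/2 for a printed line on a thin substrate, an approximate guided half-wavelength on FR-4.

# Back-of-envelope length scales at the band center (assumption-laden illustration)
c = 299_792_458.0          # speed of light, m/s
f = 915e6                  # center of the 902-928 MHz band, Hz
eps_r = 4.4                # FR-4 relative permittivity

lam0 = c / f                        # free-space wavelength
eps_eff = (eps_r + 1) / 2           # rough effective permittivity (assumption)
lam_g = lam0 / eps_eff ** 0.5       # approximate guided wavelength

print(f"free-space wavelength   : {lam0 * 1e3:6.1f} mm (half: {lam0 * 5e2:.1f} mm)")
print(f"approx guided wavelength: {lam_g * 1e3:6.1f} mm (half: {lam_g * 5e2:.1f} mm)")

The roughly 164 mm free-space half-wavelength explains why a plain printed monopole at this frequency is hard to fit on a small UAV, and why replacing part of the radiator with a thin flexible cable is attractive.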
The various configuration parameters of the antenna are as follows: the diameter of the flexible cable is 2 mm, and its material is stainless steel. For versatility, an FR-4 substrate (thickness = 0.8 mm and εr = 4.4) is adopted.
As seen, the antenna is mainly made up of three patch sections and a flexible cable (Part 1, Part 2, and Part 3). Among them, the coplanar waveguide (CPW) feeding structure is displayed in Part 1, comprising a pair of asymmetric rectangular patches. It is employed to provide good impedance matching, even though it has no radiating effect, acting as a transmission line. A straight-line patch and a meander line, used as Part 2 and Part 3, respectively, are located on the top of the dielectric substrate. On the bottom of the dielectric substrate, a groove whose two sides are coated with metal accommodates the flexible cable.
Compared to the microstrip line, the coplanar waveguide (CPW) has lower loss and is more convenient for series connections on the same side of the substrate without via holes. Therefore, the overall design adopts an asymmetric CPW structure for feeding and forms stepped slots and rectangular slots for coupling, which can improve the impedance matching between the feeding port and the antenna. The high-frequency structure simulator (HFSS) is used to simulate the design. A prototype of the proposed UAV antenna was fabricated and assembled.
Particularly, Figure 1(f) shows that the flexible cable can be bent when the drone lands on the ground and restored to its original shape when taking off. As a consequence, the flexible cable is adopted as the end of the antenna, resulting in little air resistance to the drone compared with a radiating patch printed on the dielectric substrate. Figure 2 illustrates that the impedance and bandwidth of the proposed antenna can be adjusted by changing the size of the straight-line patch, whereas the resonant frequency can be adjusted by changing the dimensions of the meander line. Using W as an example, when its value is 1 mm, the performance of S11 is not ideal; the response is slightly offset towards the low end but keeps a wide impedance bandwidth. The performance of S11 is best when its value is 3 mm; however, the impedance bandwidth is then very narrow. As the value increases, the impedance bandwidth of the antenna increases gradually. From these plots, it can be observed that the value of W has a large influence on the reflection coefficient.
Likewise, when the length of the meander-line patch is small, the resonant frequency shifts slightly towards the high end. As the value of d2 increases, the resonant frequency of the antenna moves gradually to lower frequencies, but its impedance bandwidth and the performance of S11 are virtually unchanged. It can be seen that the value of d2 has a large influence on the resonant frequency. A set of optimal values can be obtained by optimization and fine tuning. The simulated current distributions at 915 MHz are shown in Figure 3. Due to the effect of the meander-line patch, there are two current zero points. Consequently, the surface currents of the straight-line patch and the flexible cable are in the same phase, which greatly enhances the antenna's omnidirectional gain. By varying the length of the meander-line patch, the positions of the current zero points can be changed. When the value of d2 is 1 mm, one current zero point appears at the straight-line patch near the feed port, while another appears at the meander line close to the flexible cable. As the value increases, the current zero point at the straight-line patch moves gradually towards the meander line, whereas the other moves gradually towards the flexible cable. From the measured results, the proposed antenna achieves the optimal performance when the value of d2 is 2 mm.
Result and Discussion
As shown in Figure 4, the performance of the antenna remains close to optimal when the flexible cable is bent. It can operate well even if there is a slight frequency shift. Moreover, the performance immediately returns to optimal when the cable restores its original shape, thereby effectively reducing the volume of the antenna while guaranteeing good performance. Figure 5 depicts the measured and simulated S11 of the proposed UAV antenna with the dimensions given in Figure 1. Because a little copper was added on the substrate for debugging during the test, the measured results are better than the simulated ones. The antenna achieves VSWR <1.22 : 1 (|S11| < −20 dB) over the whole operating bandwidth of 902 MHz–928 MHz. The measured and simulated gain of the proposed UAV antenna is shown in Figure 6. Because the actual loss is larger than in the ideal situation, and due to measurement error, the measured gain is slightly lower than the simulated gain. The result shows that the proposed antenna achieves a high gain of 4.56 dBi in the 902 MHz–928 MHz band.
This gain of the proposed antenna is acceptable in practical UAV applications. The measured radiation efficiency of the proposed UAV antenna is shown in Figure 7, and the result shows that the proposed antenna achieves high radiation efficiency in the 902 MHz–928 MHz band. The test environment of the antenna is shown in Figure 8. The radiation patterns of the fabricated prototype at 915 MHz are shown in Figure 9, in which an omnidirectional radiation pattern with linear polarization can be observed. Although the size of the antenna is relatively small, the antenna gain is rather high, showing that this antenna has good omnidirectional radiation characteristics. Therefore, the antenna on the drone can receive signals from any angle in the horizontal plane because of its omnidirectional radiation. A performance comparison of the proposed antenna with published reference works is given in Table 1, using the wavelength to measure the size of the PCB. The miniaturization of the antenna is reflected in the use of a flexible cable instead of a radiating patch as part of the radiator, thereby reducing the length of the dielectric substrate. From Table 1, it can be seen that the proposed antenna provides higher gain with a relatively reduced size. It can be concluded that the proposed antenna combines a reduced size with good radiation properties and a simple feeding network.
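As a quick sanity check on the quoted matching figures (not taken from the paper's own measured data), the standard relation between the reflection-coefficient magnitude and VSWR can be evaluated with a few lines of Python; the −20 dB value is the specification cited above.

# Minimal sketch relating |S11| in dB to VSWR, confirming that |S11| < -20 dB
# corresponds to VSWR < 1.22:1 as stated in the text.
def vswr_from_s11_db(s11_db: float) -> float:
    gamma = 10 ** (s11_db / 20)        # reflection-coefficient magnitude
    return (1 + gamma) / (1 - gamma)   # voltage standing-wave ratio

print(vswr_from_s11_db(-20.0))   # ~1.22
print(vswr_from_s11_db(-10.0))   # ~1.92 (a common, looser matching threshold)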
Conclusion
In this work, a miniaturized high-gain flexible UAV antenna, which consists of a straight-line patch, a meander line, and a flexible cable printed on a dielectric substrate, has been proposed, fabricated, and tested. The flexible cable can be bent when the drone lands on the ground and becomes straight when taking off. As a consequence, it greatly reduces the dimensions of the whole antenna without compromising the radiating performance. It is found that, by adjusting the dimensions of the straight-line patch, the impedance and bandwidth of the antenna can be controlled effectively. By adjusting the dimensions of the meander-line patch, the surface current can be made to have the same phase on both the straight-line patch and the flexible cable, which is very helpful for generating omnidirectional radiation with a high gain level in the operating frequency range. The proposed antenna covers 902 MHz–928 MHz with VSWR <1.22 : 1. Moreover, the antenna has a low-profile structure of 341 × 15 × 0.8 mm³ and weighs about 15 g. The proposed antenna is very suitable for UAV applications.
Data Availability
The data used to support the findings of this study are available from the corresponding author upon request.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
Silver-based plasmonic nanoparticles and their application as biosensor
Silver nanoparticles (AgNPs) exhibit unique and tunable plasmonic properties. The size and shape of these particles determine their localized surface plasmon resonance (LSPR) and their response to the local environment. The LSPR property of nanoparticles is exploited for optical, chemical and biological sensing, an interdisciplinary area involving chemistry, biology and materials science. In this paper, a polymer system is optimized by blending two polymers. The two polymer composites polystyrene/poly(4-vinylpyridine) (PS/P4VP) (50:50) and (75:25) were used, as found suitable in previous morphological studies. Silver nanoparticle films of 50 and 95 nm thickness deposited on PS/P4VP (50:50), and of 50 and 150 nm thickness deposited on PS/P4VP (75:25), were explored to observe their optical sensitivity. The nature of the polymer composite in which the silver nanoparticles are embedded affects the size of the nanoparticles and their distribution in the matrix. The polymer composites used here show a uniform distribution of nanoparticles of various sizes. The optical properties of Ag nanoparticles embedded in a suitable polymer composite were explored for the development of new plasmonic applications, owing to their unique properties. The sensing capability of a particular polymer composite is found to depend on the size of the nanoparticles embedded in it. The optimum result was found for silver nanoparticles deposited at 150 nm thickness on PS/P4VP (75:25).
Introduction
Silver nanocomposites have been of enormous interest due to their commercial applications in medical, electronic, chemical and biochemical sensors and devices. Nanocomposites are multiphase materials in which at least one of the phases has a dimension in the nanoscale. These new nanomaterials have applications in novel, highly sensitive analytical devices [1]. A new development in nanotechnology is nanomaterial-based biosensors. These materials have greatly improved the sensitivity of biomolecular detection. Nano-biomaterials represent the integration of materials science, molecular engineering, chemistry and biotechnology. Nanomaterial sensors are capable of manipulating atoms and molecules for their application as detection devices. Biosensor-based nanomaterial composites have great potential in applications such as biomolecule recognition, pathogenic diagnosis, and environmental monitoring [2][3][4].
Polymer composites are used as a base for preparing nano-biomaterials, where silver nanoparticles are embedded into polymer matrices. The polymer acts as a particle stabilizer, preventing agglomeration of the particles. In the recent past, silver nanoparticles embedded into polymer composites have become highly desirable owing to the applications of silver nanocomposites in various fields such as catalysis, drug delivery, wound dressing, antimicrobial activity, sensors and optical information storage. It is well known that small silver particles embedded in a polymer matrix exhibit plasmon resonance absorption; as a result, absorption maxima occur in the visible-near infrared region, and their spectral position depends on the particle size, shape, filling factor, etc., in the polymer matrix. The surface plasmon resonance absorption for silver clusters in a polymer matrix generally occurs at a wavelength of ~430 nm. A shift of the plasmon resonance peak towards longer wavelengths occurs due to close proximity of the silver clusters. These nanoparticles exhibit unique optical properties originating from the characteristic surface plasmon produced by the collective motion of conduction electrons. The spectral position, half width and intensity of the plasmon resonance strongly depend on the particle size, shape and the dielectric properties of the particle material and the surrounding medium. Thus, the type of metal and the surrounding dielectric medium play a significant role in the excitation of particle plasmon resonance (PPR). The sensitivity of the PPR frequency to small variations of these parameters can be exploited in various applications. The differing natures of the polymeric hosts yield changes in dispersion, size distribution and impregnation depth of the silver clusters.
The high conductivity and high thermal stability of silver make it important and favored among all metals for producing nanoparticles. It is also one of the most important materials in plasmonics [5]. AgNPs are of great importance due to their unique electrical, thermal, catalytic and optical properties and sensing characteristics. All these properties are suitable for applications in biomolecule detection, immunoassays, surface plasmon optics, data storage, catalysis, surface-enhanced Raman scattering, antibacterial materials, photonics and photography [6]. However, AgNPs have been found to be unstable and to agglomerate in aqueous media. This problem can be overcome by embedding them in polymer matrices [7][8][9][10]. Apart from this, AgNPs have been found to be toxic to human cells and the ecological system; therefore, encapsulating them in polymer matrices reduces their adverse effects. Embedding AgNPs in various polymer matrices enhances their thermal, optical, mechanical and conducting properties, making them useful for application in many optical and sensing devices [11]. Therefore, two compositions of polymer composites are analyzed here for their application in sensors. The size of the silver clusters in a particular composite is manipulated by the different thicknesses of silver deposited on them, yielding different nanoparticle sizes. Synthesis of silver nanoparticles by ex situ methods does not provide homogeneous dispersion in the host polymer composite due to easy agglomeration. Preparing silver nanoparticles of various shapes and sizes in different polymer matrices is now possible by several methods. Toxic and potentially hazardous reactants are used in many of these methods, causing environmental harm. Therefore, using eco-friendly methods is the need of the hour.
Vacuum evaporation of metal onto softened polymer composites forms island or discontinuous metal films when the deposition is stopped at a very early stage. This is a simple and eco-friendly technique for preparing silver nanoparticles. However, such structures face temporal instability even in vacuum, possibly due to mobility of the islands followed by coalescence [12]. Also, when these films are exposed to the atmosphere they get oxidized; as a result, an irreversible increase in electrical resistance is found due to oxidation of the islands of nanoparticles [13]. Various inorganic materials deposited on softened polymer composites are reported in the literature [14][15][16][17]. The surface morphology of subsurface discontinuous metal films depends on thermodynamic as well as deposition parameters [16,17]. The polymer substrates are softened in order to control the viscosity of the substrate so that the silver nanoparticles can be embedded into the polymer composites. The polymer-metal interaction influences the morphology of the silver nanoparticles formed inside the polymer matrices [10,18,19]. The samples used for this study were prepared by vacuum evaporation of silver on polymer composites at high temperature and 10−6 Torr. A wide range of applications can be created by precise tailoring and optimization of the nanocomposite structures.
Blending is a process for combining the properties of polymers in order to achieve a desirable polymer system. In the present system, the properties of polystyrene (PS) are combined with those of poly(4-vinyl pyridine) (P4VP). The stability of PS is combined with P4VP in order to achieve a small size and uniform distribution of silver in the PS/P4VP composite. Among the various polymer compositions, suitable composites are explored. Silver nanoparticles embedded into PS/P4VP (50:50, 75:25) have shown room-temperature resistances in the range of a few tens to a few hundred MΩ per sheet, desirable for device applications.
Applications of AgNPs as biosensors are found rarely in the literature compared to gold nanoparticles (AuNPs), and studies on AgNP plasmonic biosensors are even scarcer. Although most of these sensors operate ex vivo, the toxicity of AgNPs is a major concern. The other limitation of using these particles bare for biosensing is their poor stability and unusual surface chemistry [20,21]. To avoid such constraints, AgNPs have been coated with a large variety of compounds in order to overcome their instability and toxicity in a given environment [22,23]. Also, the aggregation of NPs can be overcome by coatings that provide electrostatic, steric, or electrosteric repulsive forces between the particles [20]. Various coating methods are found in the literature for covering AgNPs with an organic or an inorganic medium for plasmonic biosensing applications. Embedding silver into polymer matrices is a successful method in which silver nanoparticles are dispersed uniformly in the polymer composite, restricting agglomeration of the nanoparticles. The optical properties of the NPs depend decisively on the nature of the coating material and its thickness. The unique features of this hybrid system are exploited in plasmonic systems and devices. Optical sensors are used for refractive index measurement in the biomedical, chemical and food processing industries. Thus, materials scientists are keenly focused on the exploration of ultrasensitive plasmonic detectors for biosensing applications [24,25]. The present study concerns the plasmonic biosensing response of AgNPs embedded in polymer composites, in order to improve the electrostatic, steric and electrosteric stabilization of AgNPs.
Experimental
The polymers used in this study are of laboratory grade; a detailed report is given in a previous paper [9]. The structures of P4VP and PS are shown in (a) and (b), respectively. The various polymer compositions were prepared as described in our previous report [9]. Further, the deposition of silver onto the chosen polymer systems, PS/P4VP (50:50) and (75:25), was carried out as described in the paper by Parashar [9].
In this paper, the optical properties of the best samples identified in our previous results [8], i.e., PS/P4VP (50:50) at 50 and 95 nm and PS/P4VP (75:25) at 50 and 150 nm, were used, and their sensor responses were recorded using the nanosensor lab software. This computational tool is based on Mie theory for investigating the optical response of low-dimensional structures (core-shell nanoparticles and nanomatryoshkas) and computes the extinction, scattering and absorption efficiencies. The tool is designed for refractive index sensing, simulating the sensitivity, figure of merit, quality factor, FWHM (full width at half maximum) and the scattering of electromagnetic radiation by core-shell spherical nanoparticles. It has been designed as a virtual laboratory, including a friendly graphical user interface (GUI), an optimization algorithm (to fit the simulations to experimental results) and scripting capabilities. Our previous morphological study of silver nanoparticles embedded into the above polymer composites revealed that the nanoparticles are almost spherical. Therefore, the spherical-nanoparticle simulator panel was used. We calculated the scattering, absorption and extinction efficiencies following a flow chart, entering the parameters as follows.
1) In the geometrical panel, the average nanoparticle size is taken from the previous morphological study [9]; e.g., for a 50 nm silver thickness on PS/P4VP (50:50), R1 = 80 nm.
2) The wavelength range is selected between 400 nm (min) and 1200 nm (max) with 1000 steps.
3) In the material section, the Ag data are imported and plotted.
4) The sensing parameter is selected as n = 1.33.
5) In the study section, the compute command is given.
6) The calculated results are displayed as graphs.
Result and discussion
Humidity sensors based on silver nanoparticles encapsulated in polymer composites have been reported in the past [25,26]. An organic/inorganic nanocomposite of poly(diphenylamine sulfonic acid) (PSDA), 3-mercaptopropyltrimethoxysilane (MPTMS), and nano-ZnO was prepared to make thin-film humidity sensors. The humidity sensing properties of the sensors were examined by impedance measurements in the 100 Hz-1 kHz frequency range. The sensitivity of the samples increases threefold, as measured by the change in impedance, along with good repeatability and stability in the range from 12% to 95% RH [25].
Another work reported an Ag/polymer nanocomposite synthesized by a chemical reduction process. The sensing properties were investigated by forming coatings on platinum interdigital electrodes. The sensor gives a reversible, selective and rapid response that is proportional to the humidity level within the range of 10% RH to 60% RH [26].
Polymer composites, either grafted or adsorbed with NPs, promote uniform dispersion of the NPs when embedded into the polymer matrix. The loading of silver on a polymer composite of the requisite composition could thus evolve into a good optical sensor. Our previous study of the morphology of AgNPs embedded into the polymer composites PS/P4VP (50:50) and PS/P4VP (75:25) demonstrated homogeneous dispersion of silver nanoparticles for thicknesses of 50 and 95 nm, and 50 and 150 nm, respectively [9]. The nature of the polymer system and the amount of silver deposited on it were found to affect the size distribution and dispersion of the silver nanoparticles [9]. It has been found that embedding AgNPs in the polymer composite enhances their properties [10,11]. Thus, a platform is formed for their application in sensors due to their large surface area and small dimensions. A number of nanocomposites with various amounts of AgNPs could be explored for the fabrication of sensors and biosensors based on an efficient nano-polymer composite [5].
The following analysis explores sensing in silver nanocomposites to enable the fabrication and commercialization of highly selective and specific sensing agents. The influence of AgNP parameters such as shape and composition on the sensitivity and selectivity of the sensor is explored. The extinction, scattering and absorption efficiencies are written as below.
Qext = (2/x²) Σ (2n + 1) Re(an + bn), Qsca = (2/x²) Σ (2n + 1)(|an|² + |bn|²), and Qabs = Qext − Qsca, where the sums run over n = 1, 2, 3, ..., x is the size parameter, and an and bn, the coefficients for the scattered fields, are defined in terms of Riccati-Bessel functions as in [27].
The most relevant sensing parameters are: 1) the quality factor (QF) of the scattering resonance peak, 2) the sensitivity (S), and 3) the figure of merit (FOM).
The quality factor of the resonant peak is defined as the ratio of the resonant peak wavelength to the full width at half maximum (FWHM) of the resonant peak [28]: QF = λR/FWHM (6). This means that a resonant peak with a smaller FWHM corresponds to a higher quality factor. The sensitivity (S) of the sensor is defined as the rate of shift of the resonant peak wavelength λR with the variation in the refractive index (n) of the surrounding medium: S = dλR/dn (7). The figure of merit (FOM) of the sensor is directly proportional to the quality factor of the resonant peak and to the sensitivity, and can be expressed as FOM = S × QF/λR = S/FWHM (8). Figures 1-4 show Q(sca) (scattering efficiency), Q(abs) (absorption efficiency) and Q(ext) (extinction efficiency) versus λ, the resonance wavelength, for PS/P4VP (50:50) and PS/P4VP (75:25) at thicknesses of 50, 95 and 50, 150 nm, respectively, for various values of the refractive index. It is clear from all the figures that the shape of the curves remains the same irrespective of the refractive index. Further, as shown in Figures 1 and 2, the scattering efficiency initially increases with increasing resonance wavelength and thereafter decreases continuously over the range from 400 to 1200 nm. It is evident from Figures 1 and 2 that two peak resonance wavelengths are found for PS/P4VP (50:50) at a thickness of 50 nm, whereas three peak wavelengths are found for PS/P4VP (50:50) at a thickness of 95 nm, for all values of the refractive index. The additional peak wavelength can be attributed to the increase in nanoparticle size from an average of 80 nm to 95.4 nm. In the case of the absorption efficiencies, the shape of the curves is the same, except that the first peak appears to vanish for all refractive index values in PS/P4VP (50:50) at 95 nm, whereas two clear peaks appear in PS/P4VP (50:50) at 50 nm. The absorption-efficiency curves do not show sharp peaks, as is clear from Figures 1 and 2. A peak near 400 nm in PS/P4VP (50:50) at a thickness of 95 nm seems to advance and disappear towards longer wavelengths, whereas in PS/P4VP (50:50) at a thickness of 50 nm a less sharp peak appears. Similar trends are found in PS/P4VP (75:25) at 50 and 150 nm (Figures 3 and 4), except that sharp and clear peaks are visible in PS/P4VP (75:25) at 150 nm in the scattering, absorption and extinction efficiencies. The optimum results are found in PS/P4VP (75:25) at 150 nm, as evident from the graphs and a comparative glance at Tables 1-4. For all the efficiencies, the peak resonance wavelength shifts towards longer wavelengths in all samples for all values of the refractive index. Tables 1-4 compile the calculated values of λR, Q(sca), FWHM, QF, S and FOM for all the samples at different values of the refractive index, n. This database could be useful for selecting particular optical sensors with the requisite peak resonance wavelength. The resonance wavelength λR is measured from the graphs originally obtained with the nanosensor software. It is evident that it is the size of the silver cluster, rather than the embedding polymer composite matrix, that governs the response to the sensing medium. Hence, the results for PS/P4VP (50:50) at a thickness of 95 nm and PS/P4VP (75:25) at a thickness of 150 nm are almost the same; however, the PS/P4VP (75:25) composite at 150 nm is economically more viable.
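The three figures of merit defined above (QF, S and FOM) can be computed directly from a simulated or measured resonance peak. The short Python sketch below illustrates this; the peak wavelength, FWHM and refractive-index sweep used here are hypothetical illustrative numbers, not values taken from Tables 1-4.

import numpy as np

def sensing_parameters(lam_res, fwhm, n_values, lam_res_vs_n):
    """QF = lam_res / FWHM, S = d(lam_res)/dn (slope of peak wavelength vs.
    refractive index), FOM = S / FWHM. Wavelengths in nm."""
    qf = lam_res / fwhm
    s = np.polyfit(n_values, lam_res_vs_n, 1)[0]   # nm per refractive-index unit
    fom = s / fwhm
    return qf, s, fom

# Hypothetical illustrative sweep of the sensing medium from 1.33 to 1.37
n = np.array([1.33, 1.34, 1.35, 1.36, 1.37])
peaks = np.array([520.0, 522.1, 524.3, 526.4, 528.6])   # resonance peak per n
print(sensing_parameters(lam_res=520.0, fwhm=60.0, n_values=n, lam_res_vs_n=peaks))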
Table 1. Comparison of the sensing performance parameters for PS:P4VP/50:50 (prepared with a 50 nm thick silver particulate film). The system consists of silver clusters embedded in the polymer composite, with an average nanoparticle diameter of 80 nm, in a sensing medium of refractive index 1.33-1.37.
Table 2. Comparison of the sensing performance parameters for PS:P4VP/50:50 (prepared with a 95 nm thick silver film). The system consists of silver clusters embedded in the polymer composite, with an average nanoparticle diameter of 95.4 nm, in a sensing medium of 1.33-1.37.
Table 3. Comparison of the sensing performance parameters for PS:P4VP/75:25 (prepared with a 50 nm thick silver film). The system consists of silver clusters embedded in the polymer composite, with an average nanoparticle diameter of 88.6 nm, in a sensing medium of 1.33-1.37.
Figure 5 shows the variation of the peak resonance wavelength λR versus the sensing medium n for all the chosen samples. It can be seen from the graph that the value of λR is higher for PS/P4VP (50:50) at a thickness of 95 nm and PS/P4VP (75:25) at a thickness of 150 nm. In both polymer composites, the nanocluster size is almost the same, i.e., 95.4 nm and 97 nm. Indeed, it is evident that the nanoparticle size is the decisive factor, irrespective of the polymer composition of the matrix. It was found that λR increases with increasing refractive index of the sensing medium in all samples. Figure 6 displays the scattering efficiency of the polymer composites embedded with silver nanoparticles versus the sensing medium. It is quite clear that the size of the silver cluster is the decisive factor for the scattering efficiency of a sample. The nanoparticles embedded in PS/P4VP (75:25) at 150 nm thickness are the largest, hence the scattering efficiency is the lowest; the average nanoparticle size is smallest in PS/P4VP (50:50) at 50 nm thickness, hence its scattering efficiency is the highest. However, the efficiency of all samples increases with increasing refractive index of the sensing medium, n. The full width at half maximum (FWHM) of the resonant peak is plotted against the sensing medium n in Figure 7. This clearly shows that the FWHM for all samples increases with increasing refractive index of the sensing medium. It is highest for PS/P4VP (75:25) at 150 nm and lowest for PS/P4VP (50:50) at 50 nm.
Conclusions
The optical sensing properties of silver nano-polymer composites based on refractive index measurements have yielded good results. These silver nanocomposites were prepared by vapor deposition of silver on softened polymer composites kept in a vacuum of the order of 10−6 Torr. Previous studies provided the optimum silver nanocomposites, in which the silver nanoparticles are uniformly dispersed with almost the same size. The sensing properties appear to depend on the size distribution and dispersion of the AgNPs in the polymer matrices; therefore, the samples used here are expected to give better sensing properties. A database of the sensing parameters is compiled in tables for future use. Also, these samples are prepared in thin-film form, which is good for real-time applications. The optimum optical sensing results were found for the polymer composite PS/P4VP (75:25) with a 150 nm thick film. For all the efficiencies, the peak resonance wavelength shifts towards longer wavelengths in all samples for all values of the refractive index. Tables 1-4 provide a database for particular applications of nano-silver polymer composites.
Figure 8
Figure 8 shows the variation of the sensitivity S versus the sensing medium for all the samples. It can be seen that the sensitivity is almost constant for PS/P4VP (75:25) at 150 nm and PS/P4VP (50:50) at 95 nm. For the other two samples, S varies with the sensing medium.
Figure 8.
Figure 8. S (nm/RIU) vs. sensing medium. The quality factor versus the sensing medium is shown in Figure 9 for all the samples. QF initially decreases to a minimum and thereafter increases with the sensing medium. The QF of PS/P4VP (75:25) at both thicknesses (50 and 150 nm) is found to be better than the QF of PS/P4VP (50:50) at both thicknesses (50 and 95 nm).
Table 4.
Comparison of the sensing performance parameters for PS:P4VP/75:25 (prepared with a 150 nm thick silver film). The system consists of silver clusters embedded in the polymer composite, with an average nanoparticle diameter of 97 nm, in a sensing medium of 1.33-1.37.
Immunoinformatics and Structural Analysis for Identification of Immunodominant Epitopes in SARS-CoV-2 as Potential Vaccine Targets
A new coronavirus infection, COVID-19, has recently emerged, and has caused a global pandemic along with an international public health emergency. Currently, no licensed vaccines are available for COVID-19. The identification of immunodominant epitopes for both B- and T-cells that induce protective responses in the host is crucial for effective vaccine design. Computational prediction of potential epitopes might significantly reduce the time required to screen peptide libraries as part of emergent vaccine design. In our present study, we used an extensive immunoinformatics-based approach to predict conserved immunodominant epitopes from the proteome of SARS-CoV-2. Regions from SARS-CoV-2 protein sequences were defined as immunodominant, based on the following three criteria regarding B- and T-cell epitopes: (i) they were both mapped, (ii) they predicted protective antigens, and (iii) they were completely identical to experimentally validated epitopes of SARS-CoV. Further, structural and molecular docking analyses were performed in order to understand the binding interactions of the identified immunodominant epitopes with human major histocompatibility complexes (MHC). Our study provides a set of potential immunodominant epitopes that could enable the generation of both antibody- and cell-mediated immunity. This could contribute to developing peptide vaccine-based adaptive immunotherapy against SARS-CoV-2 infections and prevent future pandemic outbreaks.
Introduction
A new coronavirus, severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), has recently emerged as a human pathogen that causes fever, pulmonary disease, and pneumonia [1][2][3]. Following an outbreak that initiated in China, human-to-human infection has spread rapidly across the world. The COVID-19 global pandemic is more severe than previous coronavirus-related outbreaks caused by severe acute respiratory syndrome coronavirus (SARS-CoV) and Middle-East respiratory syndrome coronavirus (MERS-CoV) [4][5][6]. By 30 May 2020, over 6,066,500 people were infected and 367,500 people had died globally from COVID-19. No licensed vaccine is presently available for this disease, although several vaccines are in initial clinical trial stages [7]. Given the magnitude of this international public health emergency, universal vaccines are urgently needed to control the COVID-19 pandemic.
In the postgenomic era, the availability of vast sequence data from pathogens and the advancement of computational prediction tools have greatly facilitated identifying potential immunogenic epitopes in pathogen proteins. This can be useful in designing vaccines against designated pathogens [8,9].
We retrieved the whole genome and proteome of SARS-CoV-2 isolates from different geographic locations from Genbank (NCBI). Protein sequences of SARS-CoV and MERS-CoV were also collected from Genbank. The experimentally determined B-and T-cell epitopes of SARS-CoV were retrieved from the publicly available Immune Epitope Database (IEDB) [20] with the filtering criteria of at least one positive assay: (i) positive B-cell assays, (ii) positive T-cell assays, and (iii) positive MHC binding assays.
Predicting Potential Linear B-cell Epitopes in SARS-CoV-2
Linear B-cell epitopes are peptides with antigenic abilities that are bound by receptors on the surface of B lymphocytes and, thus, generate immune responses [21]. We used multiple approaches to predict the linear B-cell epitopes from the protein sequences of SARS-CoV-2. These included three machine learning-based methods, namely, BepiPred [22], ABCpred [23], and LBtope [24]. BepiPred utilizes data obtained from three-dimensional (3D) structures of the antigen-antibody complex, based on random forests trained on B-cell epitopes. We set a cutoff of 0.5 for detecting B-cell epitopes using BepiPred. The ABCpred and LBtope methods are based on artificial neural networks trained on similar B-cell epitope positive data. ABCpred relies on random peptides for the training of negative data, in contrast to LBtope, which uses negative data based on experimentally validated non-B-cell epitopes from IEDB [20]. We used a cutoff of 0.51 and chose all window lengths of 10-20 for predicting B-cell epitopes using the ABCpred search tool.
Prediction of Potential T-cell Epitopes in SARS-CoV-2
Predicting T-cell epitopes is important for identifying the smallest peptide in an antigen that is able to stimulate CD4 or CD8 T-cells to generate immunogenicity. Thus, the aim here is to identify peptides within antigens that are potentially immunogenic. MHC-peptide binding is considered to be the most important determinant of T-cell epitopes [25]. MHC binds to the antigenic region and becomes more available on the cell surface, where T-cells can recognize it. The accurate prediction of these binders is crucial for efficient vaccine design due to the importance of MHC binders for the activation of T-cells of the immune system [26]. MHC class I and II epitopes were predicted using Tepitool [27], available at IEDB [28]. For predicting MHC class-I epitopes, the parameter for selecting predicted peptides was set to a median inhibitory concentration (IC50) of 500 nM or less, while for MHC class-II epitope prediction, the same parameter was set to 1000 nM IC50 or less [29,30]. NetMHCpan-4.0 [31] and nHLAPred [32] were also used to predict MHC class-I binding epitopes, and potential cytotoxic T-cell (CTL) epitopes were predicted using CTLPred [33]. CTLPred predicts CTL epitopes directly from antigen sequences, instead of using the intermediate step in which MHC class I binders are predicted.
Prediction of Protective Antigens
It is important to identify epitopes that are crucial for inducing protection, and to eliminate others, in order to develop peptide-based vaccines. Protective antigens are able to induce an immune response. Thus, Vaxijen V2.0 [34] was used to predict the ability of the predicted SARS-CoV-2 epitopes to act as protective antigens. The default threshold of Vaxijen V2.0 (0.4) was used to predict the protective potential of the antigens.
Analysis of Epitope Conservation and Population Coverage of T-cell Epitopes
An IEDB conservancy analysis tool was utilized in order to analyze the degree of conservation of SARS-CoV-2 B-and T-cell epitopes. The population coverage of T-cell epitopes was analyzed using tools available at the IEDB [20]. The predicted population coverage represents the percentage of individuals within a defined population which are likely to elicit an immune response to a T-cell epitope.
Prediction of Allergenicity, Toxicity and Possibilities of Autoimmune Reactions
The allergenicity of the immunodominant epitopes was predicted using AllerTOP v. 2.0 [35] and AlgPred [36]. AllerTOP v. 2.0 classifies allergens and non-allergens based on the k-nearest neighbours (kNN) method, with an accuracy of 88.7%. AlgPred classifies allergens and non-allergens using a hybrid approach (SVMc, IgE epitope, ARPs BLAST, and MAST), with an accuracy of 85%. The toxicity of the epitopes was predicted by means of the ToxinPred [37] web-server, which applies machine learning approaches using different properties of the peptides. Further, we performed a BLAST search (with a criterion of >90% identity) [38] of all of the potential epitopes against all available human antigens from positive B-cell/T-cell/MHC ligand assays for autoimmune diseases in IEDB, to determine the risk that the predicted epitopes could trigger a cascade of autoimmune reactions.
Data Collection for Structural Analysis
Peptide epitopes of various lengths (ranging from 7 to 20 residues) which presented on MHC Class I and II molecules were retrieved from SCEptRe (Structural Complexes of Epitope Receptors) [39], AutoPeptiDB [40], and Protein Data Bank (PDB) [41].
Modeling of Epitope MHC-bound Conformations
Backbone conformations of the peptides bound to human leukocyte antigen (HLA) proteins were collected from PDB, clustered, and used as structural templates for 3D modeling of the epitopes identified as immunodominant in the immunoinformatics study. Based on similarities and common structural patterns in HLA-peptide binary complexes, we generated 3D structures of the epitopes listed in Table 1 in their bound conformations. The conformations of the peptide side-chains were built using SCWRL [42].
Molecular Docking
Docking grids were generated by the autogrid module of AutoDock4 application [43], using the default values of the van der Waals scaling factor (0.8) and charge cutoff (0.15). A cubic box 35 Å in length was centered on the ligand in the active site of each protein structure. The OpenBabel modules [44] and Chimera v1.11.2 [45] were used to prepare peptides and target HLA proteins for docking. Ionization states were calculated at pH 7.0 ± 2.0. The conformers of peptide molecules were generated and docked to the protein using AutoDock4 and AutoDock Vina [46]. The molecules were docked to the canonical Site1 binding region, and docking conformations with the best scores were analyzed. The epitopes with the most promising characteristics were selected for further analysis and optimization. These characteristics included favorable interactions and top-ranked AutoDock Vina scores, together with acceptable conformations, consistent with peptide recognition by MHC Class I and II structural frameworks.
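A minimal sketch of how one AutoDock Vina run of the kind described above could be scripted is shown below. It is not the authors' exact protocol (which also used AutoDock4/AutoGrid); the receptor and peptide file names and the box center are hypothetical placeholders, while the 35 Å cubic search box mirrors the value stated in the text.

# Sketch: run AutoDock Vina for one peptide-HLA pair and read back the best
# predicted affinity. Assumes the `vina` executable is on the PATH and that
# receptor/ligand PDBQT files have already been prepared (e.g., with OpenBabel
# or Chimera, as described above).
import re
import subprocess

def dock_peptide(receptor_pdbqt, peptide_pdbqt, center, out_pdbqt, size=35.0):
    cmd = [
        "vina",
        "--receptor", receptor_pdbqt,
        "--ligand", peptide_pdbqt,
        "--center_x", str(center[0]),
        "--center_y", str(center[1]),
        "--center_z", str(center[2]),
        "--size_x", str(size), "--size_y", str(size), "--size_z", str(size),
        "--out", out_pdbqt,
    ]
    result = subprocess.run(cmd, capture_output=True, text=True, check=True)
    # The first row of Vina's result table holds the best predicted affinity (kcal/mol)
    match = re.search(r"^\s*1\s+(-?\d+\.\d+)", result.stdout, re.MULTILINE)
    return float(match.group(1)) if match else None

# Hypothetical call with placeholder file names:
# best = dock_peptide("hla_a0201.pdbqt", "epitope_model.pdbqt", (10.0, 5.0, -3.0), "docked.pdbqt")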
Identification of Immunodominant Epitopes from the Proteins of SARS-CoV-2
Immunodominant epitopes, which can generate both antibody- and cell-mediated immunity, were identified in order to generate memory cells against SARS-CoV-2. We first predicted B- and T-cell epitopes and their possible MHC alleles from the SARS-CoV-2 proteins using the variety of tools described in the Methods (Sections 2.1.2 and 2.1.3) in order to determine immunodominant epitopes. All of the B- and T-cell epitopes predicted from the different SARS-CoV-2 protein sequences were selected for further analysis. Subsequently, using a combinatorial screening approach, we analyzed all of the predicted B-cell and T-cell epitope (MHC-I and MHC-II) libraries of different lengths, from all protein sequences. The aim was to identify the immunogenic regions that could potentially act as both B-cell and T-cell epitopes. We compared the libraries of predicted B-cell epitopes vs. T-cell epitopes and selected those epitopes with 100% sequence coverage. The lengths of the immunogenic regions were selected based on the maximum coverage of B-cell or T-cell epitopes in the mapped regions; a sketch of this intersection step is given below. Figure 1 depicts the pipeline used in the study for detecting immunodominant epitopes. We predicted the abilities of the epitopes to serve as protective antigens using Vaxijen in order to understand the immunomodulatory effect of the epitopes identified from the immunogenic regions [34]. Unique epitopes were selected accordingly for further analysis. We identified a total of 17 immunogenic regions from the viral membrane glycoprotein, spike glycoprotein, and nucleocapsid phosphoprotein, onto which both B-cell and T-cell epitopes were mapped. Although immunoinformatics approaches are established for identifying potential epitopes from pathogens, some computationally predicted epitopes may not be optimally immunogenic in vivo. Therefore, it is necessary to test the predicted epitopes in vivo to ensure that they can generate B-cell and/or T-cell responses. A detailed understanding of protective immune responses against SARS-CoV is presumably important for developing a vaccine against SARS-CoV-2 [47]. For this reason, epitopes that are 100% identical between SARS-CoV and SARS-CoV-2 and experimentally confirmed in SARS-CoV were chosen in this study. Accordingly, we mapped all of the epitopes predicted from the 17 regions of the three SARS-CoV-2 proteins onto the experimentally validated epitopes of SARS-CoV, and selected only the 100% identical epitopes. The lengths of the epitopes were adjusted based on the mapped, experimentally determined epitopes of SARS-CoV. To define the immunodominant epitopes, the core parts of both B-cell and T-cell epitopes were verified within those mapped epitope sequences. Finally, we found 15 potential immunogenic regions of SARS-CoV-2 that explicitly include 25 mapped immunodominant epitopes, which can generate immune responses by both B-cells and T-cells (Table 1, Figure 2A-C, and Table S1).
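To illustrate the intersection step referenced above, the following minimal Python sketch flags stretches of a protein sequence that are covered by both a predicted B-cell epitope and a predicted T-cell epitope. The toy sequence and epitope lists are hypothetical placeholders; the actual study used the tools and cutoffs described in the Methods and retained only regions with 100% coverage.

def covered_positions(epitopes, sequence):
    """Return the set of 0-based positions in `sequence` covered by any epitope."""
    covered = set()
    for ep in epitopes:
        start = sequence.find(ep)
        while start != -1:
            covered.update(range(start, start + len(ep)))
            start = sequence.find(ep, start + 1)
    return covered

def immunogenic_regions(sequence, b_cell_eps, t_cell_eps, min_len=9):
    """Contiguous stretches covered by both B-cell and T-cell predictions."""
    both = sorted(covered_positions(b_cell_eps, sequence) &
                  covered_positions(t_cell_eps, sequence))
    regions, run = [], []
    for pos in both:
        if run and pos != run[-1] + 1:
            if len(run) >= min_len:
                regions.append(sequence[run[0]:run[-1] + 1])
            run = []
        run.append(pos)
    if len(run) >= min_len:
        regions.append(sequence[run[0]:run[-1] + 1])
    return regions

# Hypothetical toy inputs for illustration only
seq = "MFVFLVLLPLVSSQCVNLTTRTQLPPAYTNSFTRGVYYPDKVFRSSVLHS"
b_eps = ["QLPPAYTNSFTRGVYYPDK"]
t_eps = ["AYTNSFTRG", "VYYPDKVFR"]
print(immunogenic_regions(seq, b_eps, t_eps))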
Interestingly, the mapping of immunogenic regions onto the structure of the SARS-CoV-2 spike glycoprotein (Figure 2C) revealed a number of potential epitopes that are not exposed to solvent (Tables S2 and S3). For example, the beta-strand spanning Val1060-Val1068, composed of hydrophobic residues (VVFLHVTYV), is not a solvent-accessible region in the multi-subunit spike glycoprotein (Figure 2D). Indeed, the solvent-accessible surface area (SASA) was estimated to be ~0 for all residues of this epitope, with the only exception of Val1068 (SASA ~24 Å², Table S2). This region contrasts with the nearby region of another epitope, Asp663-Leu680 (DIPIGAGICASYHTVSLL, Table 1), which is mostly exposed to solvent (Figure 2E, Table S2). This implies a "recognition-after-proteolysis" pathway of protein interaction with the immune system. Figure 2 (D, E): the region Val1060-Val1068 (orange beta-strand) of the spike glycoprotein (green cartoon) is mostly composed of hydrophobic residues (VVFLHVTYV) which are not exposed to solvent; residues Asp663-Leu680 (DIPIGAGICASYHTVSLL, blue) are mostly solvent-exposed, with the exception of Cys671 and Ala672 (Table S2).
Analysis of Viral Mutations within the Potential Epitope Regions
Selection pressure from the human immune system has been shown to drive viral point mutations that evade immune surveillance [48]. Therefore, patterns of mutational events need to be examined in order to understand the epitope escape that is important for the transmission of viruses between different sub-populations. Potential immunogenic epitopes with a low chance of mutation are thus optimal candidates for generating effective vaccines. We analyzed mutations within the immunodominant epitopes identified in SARS-CoV-2 isolates from different geographic locations. We found a few single point mutations within the immunodominant regions of a few SARS-CoV-2 isolates from the USA (Figure 3). Despite the low number of point mutations in the immunodominant epitopes, they reflect the severity of mutated viral genomes within the American population. Our observations highlight that immune pressure-induced genetic drift plays an important role in the evolution of SARS-CoV-2, which might be essential for evading immune surveillance by the host. The correlation between patterns of mutations and the human immune pressure-induced genetic evolution of SARS-CoV-2 will be understood in more detail as more sequenced viruses from different countries become available.
Population Coverage of Immunodominant Epitopes
Human leukocyte antigens (HLAs) are the most polymorphic genes in humans, and their allele distribution and expression vary by ethnic group and geographical location. The classical HLA loci are class I (HLA-A, B, C, E, F and G) and class II (HLA-DR, DQ, DM and DP) molecules, which present antigens to CD8 and CD4 T-cells, respectively [49]. Therefore, identifying epitopes that can be recognized by multiple HLA alleles and cover most of the world's population is important for the development of successful vaccines. Thus, we analyzed the HLA population coverage of all of the epitopes from the immunogenic regions of SARS-CoV-2 using the IEDB population coverage analysis tool [20]. We identified seven epitopes from five immunogenic regions which cover more than 87% of the world's population (Table 2). Among these seven potential immunodominant epitopes, six are 17 amino acids in length. We found that the residue 891-918 region of the spike glycoprotein contains three potential immunodominant epitopes, two of which have world population coverages of 97.46% and 92.52%, respectively. Similarly, the residue 292-330 region of the nucleocapsid phosphoprotein contains three potential immunodominant epitopes, two of which have 87.42% and 92.81% world population coverage, respectively. These results indicate that the seven immunodominant epitopes could be potential candidates for designing vaccines against SARS-CoV-2 that cover almost the entire world population.
Population Coverage of Immunodominant Epitopes
Human leukocyte antigens (HLAs) are the most polymorphic genes in humans, and their allele distribution and expression vary by ethnic group and geographical location. The classical HLA loci are class I (HLA-A, B, C, E, F and G) and class II (HLA-DR, DQ, DM and DP) molecules, which provide antigen presentation to CD8 and CD4 T-cells [49]. Therefore, the identification of epitopes that can be recognized by multiple HLA alleles and cover most of the world's population is important for the development of successful vaccines. Thus, we analyzed population coverage by HLAs of all of the epitopes from the immunogenic regions of SARS-CoV-2 using the IEDB population coverage analysis tool [20]. We identified seven epitopes from five immunogenic regions, which cover more than 87% of the world's population (Table 2). Among these seven potential immunodominant epitopes, six are 17 amino acids in length. We found that the residue 891-918 region of the spike glycoprotein contains three potential immunodominant epitopes. Of these, two have world population coverages of 97.46% and 92.52%, respectively. Similarly, the residue 292-330 region of the nucleocapsid phosphoprotein contains three potential immunodominant epitopes. Of these, two have 87.42% and 92.81% world population coverages, respectively. These results indicate that the seven immunodominant epitopes could be potential candidates for designing vaccines against SARS-CoV-2 that can cover almost the entire world population.
Table 2. Epitopes with more than 85% world population coverage.
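To illustrate the idea behind such coverage estimates, the sketch below computes a simplified coverage figure from HLA allele frequencies under Hardy-Weinberg assumptions. It is not the IEDB algorithm (which additionally models the distribution of epitope/HLA combinations per individual), and the allele frequencies used are illustrative placeholders.

```python
# Simplified sketch of epitope-set population coverage under Hardy-Weinberg assumptions.
# This is NOT the IEDB algorithm; it only estimates the fraction of individuals carrying
# at least one HLA allele that is predicted to present at least one epitope in the set.
# Treating alleles as independent is itself a simplification.

def population_coverage(restricting_allele_freqs):
    """restricting_allele_freqs: gene frequencies of alleles that present >= 1 epitope."""
    not_covered = 1.0
    for f in restricting_allele_freqs:
        # probability an individual carries no copy of this allele (Hardy-Weinberg)
        not_covered *= (1.0 - f) ** 2
    return 1.0 - not_covered

if __name__ == "__main__":
    # hypothetical allele frequencies for HLA alleles predicted to bind the epitope set
    freqs = [0.12, 0.09, 0.15, 0.07]
    print(f"Estimated coverage: {population_coverage(freqs):.1%}")
```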
Analysis of Allergenicity, Toxicity and Autoimmune Reactivity
Epitope allergenicity is a prominent obstacle for vaccine development. We thus verified that the identified epitopes are not allergens. The allergenicity analysis of the seven immunodominant epitopes (Table 2) highlighted that six of these epitopes were not predicted as allergens using both AllerTOP [35] and AlgPred [36]. Only one epitope ("FIEDLLFNKVTLADAGF") was predicted as an allergen by AllerTOP, whereas the AlgPred method predicted it as a non-allergen. Therefore, a proper classification was not possible for this epitope due to the limitations of computational prediction methods. Toxicity profiling of these predicted epitopes revealed that all were safe and possibly non-toxic. Epitope spreading is a process where diversification of the immune response is induced by an antigen to meet both B-cell and T-cell specificities during a chronic autoimmune or infectious response [50,51]. Thus, we analyzed the possibility that the seven predicted immunodominant epitopes (Table 2) would generate autoimmune reactions. For this purpose, we performed a BLAST search of our epitopes against the database of epitope sequences of human antigens for autoimmune diseases, which were validated by positive B-cell/T-cell/MHC ligand assays. Consequently, we found that none of the human epitopes for autoimmune disease share significant sequence identity with our predicted SARS-CoV-2 immunodominant epitopes (Table 2). This result indicates that the seven epitopes have a very low risk of generating autoimmune reactions in humans.
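As a rough illustration of this type of cross-reactivity screen, the sketch below flags shared 8-mers between a candidate epitope and a toy set of human epitope sequences. A real analysis would rely on BLAST against the curated IEDB autoimmune epitope set, so this exact-k-mer check is only a simplified stand-in, and the "human" sequences shown are fabricated for demonstration.

```python
# Illustrative stand-in for the BLAST screen against human autoimmune-disease epitopes:
# flag any shared k-mer (here k = 8) between a candidate vaccine epitope and known human
# epitopes as a potential cross-reactivity signal. The human sequences are placeholders.

def shared_kmers(candidate: str, human_epitopes: list[str], k: int = 8) -> list[tuple[str, str]]:
    cand_kmers = {candidate[i:i + k] for i in range(len(candidate) - k + 1)}
    hits = []
    for ep in human_epitopes:
        for i in range(len(ep) - k + 1):
            if ep[i:i + k] in cand_kmers:
                hits.append((ep, ep[i:i + k]))
    return hits

if __name__ == "__main__":
    candidate = "FIEDLLFNKVTLADAGF"                 # one of the reported epitopes
    human_db = ["AAKVTLADAGFAA", "QQQQQQQQQQQQ"]    # toy 'autoimmune' sequences
    print(shared_kmers(candidate, human_db) or "No shared 8-mers found")
```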
Structural Analysis and Modeling of Epitope Presentation by MHC Class I and II Systems
Epitopes are faced with extremely complex and competitive environments that include the multitude of HLA proteins that bind immunogenic peptides with different affinities, and present selected epitopes to surface receptors on immune cells. Therefore, we performed molecular docking analysis to understand the binding interactions of the identified immunodominant epitopes with human MHC complexes.
Structures of different HLA-peptide complexes from MHC class I and II were collected and aligned, as described in Methods. Structures of HLAs are fairly similar within each group (I and II) and share the same canonical fold. The epitopes clustered in similar conformations in the HLA antigen-binding grooves created by two helices in parallel orientation (Figure 4A,B). For the most part, the backbone "traces" of the peptides were similar (Figure 4A). The N- and C-termini occupied essentially the same positions in pockets A and F of the HLA binding sites (Figure 4C,D). This suggests that conformational flexibility was mostly concentrated in the middle part of the epitope sequences, whereas the motion of terminal residues was restricted, in agreement with the possibility of "bulged" conformations. Based on these similarities and the common canonical structural properties of HLA-peptide binary complexes, we generated 3D structures of the epitopes listed in Table 1 in their bound conformations. These epitope molecules were built using ~150 residue backbone templates taken from epitope structures collected in SCEptRe (Table S4) and AutoPeptiDB (Table S5). Several types of peptide-HLA binding mode can be distinguished, up to type (6), peptide-HLA-BCR (MHC II); in this study, types 1, 2, and 3 were considered. We modeled the binding of the epitopes to different HLA proteins from MHC class I and II, and to HLA-TCR (MHC I). In the peptide-HLA-TCR type of binding, the docking scores were mostly higher (as compared to the binary peptide-HLA complexes), because the epitope molecules were confined to the interface area between their cognate HLA/TCR proteins (Figure 4C). This mode of binding implies that the N- and C-termini are bound to the HLA surface, whereas the middle residues interact with the TCR.
Using the crystal structure of the nonapeptide KTFPPTEPK bound to HLA-A*1101 (PDB ID 1x7q) as the reference state, we performed an extensive conformational sampling and docking study of this complex. We demonstrated that the top-scoring docked peptide conformations were clustered around the native conformation, with an estimated energy of −9.97 kcal/mol (corresponding to the nanomolar affinity range). Moreover, we found similar binding energies (~−9.5 kcal/mol) in docking simulations of KTFPPTEPK binding with HLA-A*02:01 (epitopes from Table 1, Table S6). Therefore, the computational protocol we used (see Methods) enabled: (1) the generation of a library of immunogenic sequences, and (2) structure-based selection of appropriate candidates using docking to multiple HLA structural templates. This approach was applied to all of the epitopes listed in Table 1. Some of these immunogenic sequences constitute overlapping sites. For example, the sequence of the reference nonapeptide (KTFPPTEPK) was identical to region Lys362-Lys370 in the SARS-CoV nucleocapsid protein. In the SARS-CoV-2 variant, this motif was predicted in the epitope sequences LNKHIDAYKTFPPTEPK, KHIDAYKTFPPTEPKKDKKK, and YKTFPPTEPKKDKKKK, corresponding to positions Lys361 to Lys369 (Figure 4A, sky-blue area on the nucleocapsid protein surface). The nonapeptide KTFPPTEPK demonstrated high-affinity binding to the protein from MHC Class I, whereas its interaction with HLA-DRB1 (from MHC Class II) is less pronounced (estimated binding energy of ~−6 to −7 kcal/mol). Vice versa, the extended peptides LNKHIDAYKTFPPTEPK (length 17), KHIDAYKTFPPTEPKKDKKK (length 20), and YKTFPPTEPKKDKKKK (length 16) do not fit the binding sites of HLAs from MHC Class I. Interestingly, we found that the core part (KTFPPTEPK) of the LNKHIDAYKTFPPTEPK peptide can bind to the recognition site of HLA proteins from MHC Class I (~−7 to −8 kcal/mol), whereas the N-terminal part of this 17-residue peptide is arranged outside the A-pocket. The C-terminal part was found to occupy the F-pocket of the binding site (Figure 4D). In agreement with the well-known binding mode in the peptide-MHC class II system, the 17-residue peptide LNKHIDAYKTFPPTEPK demonstrated high-affinity docking scores of −9 to −10 kcal/mol in interaction with DRB1 proteins. Accordingly, our molecular docking studies imply that peptides consisting of 9-11 amino acids were mostly recognized by MHC Class I molecules, whereas longer sequences tend to target the MHC Class II system (Figure S1). We predicted the MHC-I processing of the identified immunodominant epitopes (Table 1) for all of the available MHC alleles of HLA-A, HLA-B, and HLA-C using the IEDB tool (http://tools.iedb.org/processing/) [20], and found that all of the immunodominant epitopes can undergo further proteolysis and recognition by MHC class I molecules (considering a processing score >1). Therefore, the core part of immunodominant epitopes with longer sequence lengths can be presented by MHC class I molecules after proteasomal processing.
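The link drawn above between a docking energy of about −9.97 kcal/mol and nanomolar affinity can be checked with the standard relation ΔG = RT ln Kd. The short sketch below performs that conversion, assuming T = 298 K and treating the docking scores as rough free-energy estimates rather than measured affinities.

```python
# Worked check of the reported docking energies: convert a binding free energy (kcal/mol)
# into an approximate dissociation constant via dG = RT * ln(Kd), assuming T ~ 298 K.
# Docking scores are only estimates, so this is an order-of-magnitude sanity check.
import math

R_KCAL = 0.0019872  # gas constant in kcal/(mol*K)
T = 298.15          # assumed temperature in kelvin

def kd_from_dg(dg_kcal_per_mol: float) -> float:
    """Return Kd in mol/L for a (negative) binding free energy in kcal/mol."""
    return math.exp(dg_kcal_per_mol / (R_KCAL * T))

if __name__ == "__main__":
    for dg in (-9.97, -9.5, -7.0, -6.0):
        print(f"dG = {dg:6.2f} kcal/mol  ->  Kd ~ {kd_from_dg(dg):.1e} M")
```

For −9.97 kcal/mol this gives roughly 5e-8 M (about 50 nM), consistent with the nanomolar range stated above, while −6 to −7 kcal/mol corresponds to the micromolar range.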
Discussion
Vaccination is an effective way to improve public health by building up adaptive immunity to a target pathogen [52]. However, it takes considerable time to screen vaccine targets for clinical validation and the production of a vaccine. Advances in bioinformatics, next-generation sequencing technology, immunoinformatics, and reverse vaccinology can minimize the time for screening antigens from protein sequences of pathogens and offer advantages in the search for potential new vaccine targets [53,54]. Several antiviral drugs have been tested against COVID-19; however, none of these drugs proved to be completely effective against the disease [55]. The current global emergency of the COVID-19 outbreak urgently calls for a vaccine against SARS-CoV-2 [56]. Therefore, identifying which parts of the SARS-CoV-2 protein sequences can generate an immune response in humans will facilitate designing a vaccine against this viral pathogen [57]. While a few genetic variations exist between SARS-CoV and SARS-CoV-2, these viruses are more than 85% identical in their genomic sequences [58]. Considering this high genetic similarity, a few recent studies identified all of the completely identical B-cell and T-cell epitopes of SARS-CoV-2, based on the experimentally-determined SARS-CoV epitopes [47,59]. However, knowledge is still lacking, and a full picture of the SARS-CoV-2 epitopes that could have immunomodulatory effects in humans cannot yet be presented [60].
In the present study, we exploited immunoinformatics-based approaches to identify potential immunodominant epitopes from SARS-CoV-2, which could be useful for developing vaccines against the COVID-19 disease. The vaccines should be capable of activating both humoral and cellular immune responses in humans. Our approach to defining immunodominant epitopes entails identification of overlapping regions of B-cell and T-cell epitopes (MHC-I and MHC-II) from proteins of SARS-CoV-2, particularly at those sites where these epitopes are 100% identical to the experimentally-validated epitopes of SARS-CoV. We identified 15 potential immunogenic regions from three proteins of SARS-CoV-2, and mapped 25 epitopes that are 100% identical to experimentally validated SARS-CoV epitopes. Among the 25 potential immunodominant epitopes identified, which contain 9-28 amino acid residues, most were 16-18 residues long. To understand the binding patterns of the epitopes with MHC-I and MHC-II, we performed structural and molecular docking analyses. We found that, in the library of our immunogenic sequences, epitopes 9-11 residues in length were mostly recognized by HLA proteins from MHC Class I, whereas longer epitopes tended to bind to MHC Class II proteins with higher affinities. This finding is in agreement with known canonical preferences. Further analysis of MHC class I processing reveals that longer epitope sequences can undergo proteasomal processing and that the core region for MHC class I recognition within the epitope can be presented on the cell surface for surveillance by CD8 T-cells. An analysis of the population coverage by HLAs revealed seven epitopes, among the predicted 25 immunodominant epitopes, with HLA coverage of more than 87% of the global population and high binding affinity to MHC-I and MHC-II, as evidenced by the structural and docking analyses. Furthermore, these seven epitopes were predicted to be non-allergenic, non-toxic, and of low risk of triggering autoimmune responses, which highlights their potential as successful vaccine targets. The viral epitopes that are least likely to mutate should be selected in order to develop an effective vaccine. Thus, we analyzed available SARS-CoV-2 genomes from various geographic locations to identify the percentage of mutations in the suggested epitope regions. We found evidence of point mutations in a few epitopes of SARS-CoV-2 isolates from the USA. This suggests that human immune pressure-induced genetic drift plays a central role in the genetic adaptation of SARS-CoV-2. Interestingly, we did not find any point mutations in the aforementioned seven potentially immunodominant epitopes. This result indicates that these seven epitopes are potentially effective vaccine candidates. Hence, vaccines developed using these seven immunodominant epitopes could activate both humoral and cellular immune responses in humans, and these epitopes could cover almost all of the worldwide population. Our results thus offer important insight for the development of a peptide vaccine for COVID-19.
Conclusions
The COVID-19 outbreak is an emerging threat across the globe. Despite this, no definitive antiviral drugs or vaccines have yet been reported for fighting this disease. In the present study, we identified immunodominant epitopes from SARS-CoV-2 proteins that could induce both humoral and cell-mediated immune responses in humans, using comprehensive immunoinformatics approaches. Molecular docking of the immunodominant epitopes with HLA alleles supports their high binding affinities for different HLA alleles. Further, seven potential immunodominant epitopes were shortlisted based on their higher conservancy, higher global population coverage, and high-affinity interactions with MHC class I and class II alleles. These epitopes have a low risk of being allergenic or toxic, or of generating autoimmune reactions. These findings highlight that these seven immunodominant epitopes could be potential vaccine targets against SARS-CoV-2. The computational approaches used in this study could serve as a benchmark for the identification of immunodominant epitopes from other emerging pathogens, particularly coronaviruses, in order to develop potential universal vaccines against various new strains.
Supplementary Materials:
The following are available online at http://www.mdpi.com/2076-393X/8/2/290/s1. Figure S1: Typical binding mode of an elongated epitope in the HLA protein from MHC class II. Table S1: Details of all the predicted immunodominant epitopes of SARS-CoV-2, Table S2: SASA values calculated for the epitopes presented in Figure 2D,E, Table S3: SASA values calculated for all residues of the SARS-CoV-2 spike glycoprotein, Table S4: Epitopes retrieved from SCEptRe which were used as backbone templates for structural analysis, Table S5: Epitopes retrieved from AutoPeptiDB which were used as backbone templates for structural analysis, Table S6: Identification of the binding core part in potential immunogenic epitopes, and binding/docking energies in HLA proteins from MHC Class I and II.
Mechanical Properties of Treadmill Surfaces Compared to Other Overground Sport Surfaces
The mechanical properties of the surfaces used for exercising can affect sports performance and injury risk. However, the mechanical properties of treadmill surfaces remain largely unknown. The aim of this study was, therefore, to assess the shock absorption (SA), vertical deformation (VD) and energy restitution (ER) of different treadmill models and to compare them with those of other sport surfaces. A total of 77 treadmills, 30 artificial turf pitches and 30 athletics tracks were assessed using an advanced artificial athlete device. Differences in the mechanical properties between the surfaces and treadmill models were evaluated using a repeated-measures ANOVA. The treadmills were found to exhibit the highest SA of all the surfaces (64.2 ± 2%; p < 0.01; effect size (ES) = 0.96), while their VD (7.6 ± 1.3 mm; p < 0.01; ES = 0.87) and ER (45 ± 11%; p < 0.01; ES = 0.51) were between those of the artificial turf and the track. The SA (p < 0.01; ES = 0.69), VD (p < 0.01; ES = 0.90) and ER (p < 0.01; ES = 0.89) were also shown to differ between treadmill models. The differences between the treadmills commonly used in fitness centers were much lower than the differences between the treadmills and track surfaces, but they were sometimes larger than the differences with artificial turf. The treadmills used in clinical practice and research were shown to exhibit widely varying mechanical properties. The results of this study demonstrate that the mechanical properties (SA, VD and ER) of treadmill surfaces differ significantly from those of overground sport surfaces such as artificial turf and athletics track surfaces, as well as asphalt or concrete. These different mechanical properties of treadmills may affect treadmill running performance, injury risk and the generalizability of research performed on treadmills to overground locomotion.
Introduction
Treadmills are widely used in different settings including sports training, exercise testing, rehabilitation and research [1]. Although it is frequently assumed that locomotion on a treadmill is a surrogate for ground locomotion, there is controversy as to the comparability of the biomechanical, physiological, perceptual or performance outcomes between the two conditions [1][2][3].
Insufficient familiarization and a lack of air resistance can make treadmill running differ from running overground [4][5][6]. However, there is recent meta-analytical evidence that differences can still be found between the two conditions independent of previous familiarization [3] and that the effect of air resistance becomes a significant confounder only at relatively high running speeds (approximately above 16 km/h), which is actually faster than the speeds used in most studies in the field [1]. Factors other than familiarization or air resistance might thus be involved. In this regard, the role of the belt dimensions and intra-belt speed fluctuations remains largely unclear but might be relatively small for modern treadmills with strong driving mechanisms that provide minimal intra-stride belt speed variability, including high-quality research-based treadmills [3]. On the other hand, the controversy in the field regarding the comparison of treadmill vs. overground running could also be caused by dissimilarities in the mechanical properties of the running surfaces used in the different studies [2,3,7,8]. Indeed, treadmills' mechanical properties have an important influence (in fact, greater than that of the lack of air resistance) on physiological responses [2,9] and can also affect running biomechanics [3], since athletes adjust their leg stiffness and dynamics when running on surfaces with different mechanical properties [10][11][12][13].
Although the mechanical properties of many sport surfaces (e.g., artificial turf pitches, athletics tracks, sports hall floors, tennis courts and gymnastic crash mats) are frequently assessed to ensure they meet the criteria established by international sport federations and other governing bodies [14], this is not the case for treadmill surfaces, for which there are as yet no standardized criteria. In this sense, current regulations (both European and American) define constructive and general safety aspects without any mention of the mechanical properties of the surface [15][16][17]. The same limitation applies to the bulk of scientific research comparing treadmill and overground locomotion [3].
Assessing the mechanical properties of treadmill surfaces is therefore an important issue, not only in sports but also from a clinical perspective. Indeed, treadmill surfaces' mechanical properties have a significant influence on peak plantar forces and metabolic energy consumption [8,18], and treadmill running has been associated with a lower risk of developing tibial stress fractures but an increased risk of overload injuries at the Achilles tendon compared to overground running [19][20][21], due to altered lower-extremity kinetics and kinematics.
Generally, regulations require that the three main mechanical properties of sports surfaces, namely shock absorption (SA), vertical deformation (VD) and energy restitution (ER), are evaluated [22,23]. However, the few studies that have characterized treadmills' mechanical properties in any way have mainly focused on surface stiffness [18,24]. Although stiffness is closely related to VD, it provides little information regarding SA and ER. In this context, and given that the mechanical properties of treadmills remain largely unknown, the main purpose of this study was to characterize SA, VD and ER among different treadmill models designed for fitness, research and rehabilitation purposes, and to compare the results with those obtained for other man-made surfaces typically used in sports: artificial turf and athletics track surfaces. In addition, the relationship between the different mechanical properties can provide a more comprehensive understanding of the behavior of the surface and its influence on athletes. Although these relationships have been previously studied in overground surfaces, they remain largely unknown for treadmills. Therefore, a second aim was to assess the relationship between SA, VD and ER and whether this relationship remained consistent across surfaces.
Sample
A total of 77 treadmills, 30 artificial turf pitches and 30 track and field tracks were included in the study. The treadmills comprised 70 conventional flat treadmills from fitness centers (fit-TR), 6 non-instrumented treadmills from different research laboratories (lab-TR), and one curved non-motorized treadmill (NM-TR) (Table 1). Artificial turf and track samples were selected randomly from a database of field tests performed by a certified laboratory.
Procedures
We assessed SA, VD, and ER with an advanced artificial athlete (AAA) device (Wireless Value; Emmen, The Netherlands) that consists of a mechanical drop test simulating the support of an athlete's foot on the ground. The characteristics of the apparatus are thoroughly described in Section 12 of the current FIFA standards [23], the model used here being a wireless handheld device that provided ease of operation and simple and fast measurements. Artificial turf and track surfaces were assessed at different locations in accordance with current FIFA and World Athletics protocols, respectively [23,25]. For that, we performed three repetitions of the drop test at each test location, with intervals of 30 ± 5 s. We discarded the results of the first test and calculated the mechanical properties of each location as the mean values of the second and third tests. The treadmills were assessed at three points as described elsewhere [26], performing only one drop test per location. For each surface included in the study, we calculated the SA, VD, and ER as the mean values of all the test locations.
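The aggregation scheme described above (discard the first drop, average the remaining drops per location, then average across locations) can be expressed compactly. The sketch below does so with illustrative, non-measured readings, and assumes a single drop per location for treadmills as described.

```python
# Minimal sketch of the aggregation described above: for overground surfaces, three drops
# per location are performed, the first is discarded, and the location value is the mean
# of drops 2 and 3; the surface value is the mean over locations. For treadmills, a single
# drop per location is assumed. Numbers below are illustrative, not measured data.
from statistics import mean

def location_value(drops, discard_first=True):
    usable = drops[1:] if discard_first and len(drops) > 1 else drops
    return mean(usable)

def surface_value(locations, discard_first=True):
    return mean(location_value(d, discard_first) for d in locations)

if __name__ == "__main__":
    # shock absorption (%) readings: 3 locations x 3 drops on an artificial turf pitch
    turf_sa = [[62.1, 63.0, 63.4], [60.8, 61.9, 62.2], [64.0, 64.5, 64.3]]
    # treadmill: 3 locations x 1 drop
    treadmill_sa = [[64.8], [63.9], [64.1]]
    print(f"Turf SA      = {surface_value(turf_sa):.1f} %")
    print(f"Treadmill SA = {surface_value(treadmill_sa, discard_first=False):.1f} %")
```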
Statistical Analysis
Data are presented as means and standard deviations (SDs). We used the Kolmogorov-Smirnov and Levene's tests to check the normality of the data distribution and the homogeneity of variances, respectively. We compared the mechanical properties across the three types of surfaces (fit-TR, artificial turf and athletics track) with a one-way analysis of variance (ANOVA), with the Bonferroni test used for post hoc pairwise comparisons. We used the same approach to compare the mechanical properties within the different fit-TR models. We calculated the effect size for the group effect (ES) with the partial eta-squared (ηp²) value, with the following interpretation: small (ηp² = 0.01-0.059), medium (ηp² = 0.06-0.14) and large effects (ηp² > 0.14). Finally, we also calculated the Pearson's correlations between the three mechanical properties within each type of surface. We used the statistical software SPSS V24.0 for Windows and set the level of significance at p < 0.05.
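Although the analysis was run in SPSS, the same comparisons can be reproduced in open-source tooling. The sketch below runs a one-way ANOVA on simulated shock-absorption values, derives eta-squared (which equals partial eta-squared in a one-way design), and computes a Pearson correlation. All numbers are simulated for illustration, not the study data.

```python
# Sketch of the statistical comparisons described above using synthetic data:
# one-way ANOVA across surface types, effect size as (partial) eta-squared, and
# a Pearson correlation between two mechanical properties.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# simulated shock absorption (%) for three surface types
groups = {
    "treadmill": rng.normal(64.2, 2.0, 70),
    "turf": rng.normal(62.0, 3.0, 30),
    "track": rng.normal(36.0, 3.0, 30),
}

f_stat, p_val = stats.f_oneway(*groups.values())

# eta-squared for a one-way design: SS_between / SS_total
all_vals = np.concatenate(list(groups.values()))
grand_mean = all_vals.mean()
ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups.values())
ss_total = ((all_vals - grand_mean) ** 2).sum()
eta_sq = ss_between / ss_total
print(f"F = {f_stat:.1f}, p = {p_val:.2e}, eta^2 = {eta_sq:.2f}")

# Pearson correlation between two simulated properties within one surface type
sa = groups["treadmill"]
vd = 0.2 * sa + rng.normal(0, 0.5, sa.size)   # fabricated association for illustration
r, p = stats.pearsonr(sa, vd)
print(f"SA vs VD (treadmill): r = {r:.2f}, p = {p:.3f}")
```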
Results
We excluded lab-TR and NM-TR data from the analyses, as they did not meet the assumptions of normal distribution and homogeneity of variances. The results for these treadmills are shown for information in the graphical analysis (Figure 1).
When comparing the overall differences in the mechanical properties across the three types of surfaces (fit-TR, artificial turf, and track and field) we found a significant group (i.e., "type of surface") effect for SA, VD and ER (Table 2). In post hoc pairwise comparisons, SA was lower in track than in the other two surfaces (p < 0.001 vs. both fit-TR and artificial turf) and lower in artificial turf than in fit-TR (p = 0.001). VD was also lower in track than in the other two surfaces (p < 0.001 vs. fit-TR and artificial turf, respectively) and lower in fit-TR than in artificial turf (p < 0.001). By contrast, ER was higher in track than in the other two surfaces (p < 0.001 vs. fit-TR and artificial turf) and also lower in artificial turf than in fit-TR (p = 0.002). (Table 2 notes: data are mean ± SD; ER, energy restitution; SA, shock absorption; VD, vertical deformation; Ŧ p < 0.05 vs. treadmill; * p < 0.05 vs. artificial turf.) Of note, World Athletics states that the artificial athlete (AA) device should be used instead of the advanced artificial athlete (AAA) to assess the mechanical properties of track surfaces. The equivalence between both test apparatus has been previously described [22]. Thus, the above reported values for track surfaces (which were obtained using the AAA) would be equivalent to SA and VD values of ≈35.5% and ≈1.73 mm, respectively, when assessed with the AA.

Table 3 shows the differences between the six fit-TR models, revealing a significant group effect for SA, VD and ER. The treadmill models of the brand Life Fitness (LF97T and LFDX) displayed higher values of SA, VD and ER compared to the other treadmills (p < 0.01 for all cases), while the Precor model (PRE956I) showed the lowest values of VD and ER (p < 0.05 for all cases), with no significant differences in SA compared to the Technogym models.
Figure 1 shows the product-moment correlations between the mechanical properties of each surface, taking all of the fit-TR models as a single group. All the surfaces showed a strong positive correlation between SA and VD, this association being slightly weaker for the fit-TR. As for the SA vs. ER and the VD vs. ER relationships, artificial turf and track surfaces showed a strong negative correlation in both cases, whereas positive correlations (moderate and strong, respectively) were found for fit-TR.
Discussion
Our results show differences between the mechanical properties of treadmill surfaces, artificial turf pitches and athletics tracks. Taken together, artificial turf surfaces comply with the international standards for both football [23] (SA, 55-70%; VD, 4-11 mm; ER, N/A) and rugby [27] (SA, 55-70%; VD, 5.5-11.0 mm; ER, 20-50%), and the track surfaces meet the criteria established by World Athletics when assessed with the AA [25] (SA, 35-50%; VD, 0.6-2.5 mm; ER, N/A). When compared to these surfaces, treadmills show statistically significant differences in all mechanical properties. Thus, treadmills have the highest SA ability of all the surfaces, while their VDs and ERs range between those of the artificial turf and the track, being much closer to the first. When compared to other surfaces such as asphalt or concrete (with SA values below 2%, and VDs and ERs close to 0 [7,28]), these differences are even higher. This suggests that, despite having been conceived for running and walking, the mechanical behavior of treadmill surfaces differs remarkably from that of other surfaces used for similar purposes such as tracks or asphalt roads. By contrast, treadmill surfaces seem to better reproduce the mechanical properties of the artificial turf.
Our results are in line with those of previous studies reporting that treadmill surfaces are usually more compliant than overground running surfaces [13], and also with those reporting that treadmill surfaces overall have a less compliant (here indicated by a lower VD) and higher damping (here indicated by a higher ER) behavior than artificial turf surfaces [9,29]. However, our findings regarding the mechanical behavior of treadmills cannot be generalized, since there are large differences between treadmill models, even within the same brand. Indeed, our results show significant differences between the treadmills commonly used in fitness centers (fit-TR) of up to 6%, 3.1 mm and 25% in SA, VD and ER, respectively. These findings suggest that fit-TR may not be considered as homogeneous surfaces in terms of mechanical properties and that each treadmill model should be tested individually in order to characterize its mechanical behavior. Moreover, our results suggest that differences may exist between treadmill brands, as previously suggested [30], although the small sample of brands and models included in this study precludes the ability to draw general conclusions.
While keeping in mind that lab-TR could not be included in the statistical analyses, our results suggest that differences across lab-TR could be even greater than those reported for fit-TR. In this regard, some studies have shown that differences in the mechanical properties of treadmill surfaces can affect the metabolic cost and ground reaction forces during running [18,31], and others have reported that the varying mechanical properties of the running surface may result in premature fatigue or an undesirable challenge during a certain task [32,33]. Collectively, these findings suggest that researchers, clinicians and athletes using a lab-TR for specific purposes must carefully choose the model to be used, since this may affect the generalizability of clinical assessments or research performed on the treadmill, potentially leading to erroneous research findings [3,13,18,31,34]. For example, our findings imply that marked differences in mechanical properties between treadmill and overground surfaces could critically affect footwear studies using treadmills to assess the effects of running shoes on running economy and running biomechanics [35][36][37], since the optimal footwear on a treadmill may not necessarily be the optimal footwear on an overground surface. Therefore, researchers using treadmills to reproduce overground conditions in research or clinical settings should attempt to use a treadmill whose surface mimics as closely as possible the mechanical properties of the specific overground surface, since the comparability between both conditions will vary depending on the treadmill platform [18]. We therefore encourage the persistent testing and reporting of the mechanical properties of the surfaces to allow reliable comparisons to be made in this context, especially in research that aims to investigate the relationship between treadmill and overground locomotion, or where there is the need to reproduce overground conditions for specific purposes (e.g., to investigate the effects of footwear).
Our results show a greater dispersion of treadmills' mechanical properties compared to those of artificial turf and track surfaces (Figure 1). Our findings on the relationship between SA, VD and ER in artificial turf and track surfaces support previous studies reporting that an increased compliance (i.e., a higher VD) in overground surfaces is associated with a reduction in the re-utilization of elastic energy (i.e., a lower ER) [38][39][40], which would lead to an increased metabolic cost and alterations in running kinematics. However, as opposed to overground surfaces, both SA and VD are directly proportional to ER in treadmills, meaning that treadmills with more shock-absorbing and compliant surfaces would increase energy return to the runners. This supports previous research pointing out that the metabolic cost of running is greater for treadmills with stiffer running platforms [18,23], contrary to what is encountered overground [7]. Moreover, the fact that the ER of some lab-TR is drastically lower than that of track surfaces could also explain previous findings reporting that the metabolic cost at low [32] and submaximal speeds (with controlled air resistance) [2] is significantly higher on a treadmill compared to that on track surfaces. The increase in treadmill ER as VD increases will most likely be due to the materials and structural components forming their surfaces, which determine their viscoelastic (or damping) properties relevant during the unloading phase. The latter may have relevant implications in terms of muscle activity and injury risk, as well as in terms of performance outcomes and the reproducibility of kinematic patterns when comparing treadmill to overground locomotion. In this sense, it has been reported that stiffer surfaces lead to increased muscle activity [41] and that surfaces providing increased mechanical cushioning affect running kinematics [11]. Nevertheless, the implications for performance and injury risk of surfaces with comparable stiffnesses but different damping properties remain unclear.
Overall, the present findings support the importance of regulating the mechanical properties of treadmill surfaces because (1) the mechanical properties of all sports surfaces are considered to be important determinants of performance and injury risk, and (2) our results indicate that the mechanical properties of treadmills vary across models and do not match those of other surfaces that are often used for walking and running. Moreover, since treadmills with very similar VD (which is an indicator of their stiffness) may differ strongly in SA and ER, our results also indicate that assessing and regulating only stiffness in treadmill surfaces may not suffice for fully characterizing their mechanical behavior. Similarly, relating research results to surface stiffness could potentially lead to misleading conclusions. Further research in this area may help manufacturers to design treadmills with surface properties that match those of specific overground surfaces, or treadmills with surface properties specifically designed to achieve certain purposes such as enhancing athletic performance or decreasing injury risk. Additionally, future research should assess whether mechanical properties of treadmill surfaces could correlate with other variables such as a treadmill's usage time, temperature or kilometers traveled, which is something that the present research failed to investigate due to a lack of data.
Conclusions
The mechanical properties (shock absorption, vertical deformation and energy restitution) of treadmill surfaces differ significantly from those of commonly used overground sport surfaces such as artificial turf and athletics tracks. Our results also suggest that, unlike overground surfaces, treadmills with more shock-absorbing and compliant surfaces would be expected to increase energy return to the athletes. Moreover, our results show remarkable differences between different treadmill models, suggesting that treadmills will most likely vary in their comparability to overground surfaces depending on the mechanical properties of their platforms.
Implication of exercise interventions on sleep disturbance in patients with pancreatic cancer: a study protocol for a randomised controlled trial
Introduction and purpose Patients with pancreatic cancer (PC) have long been known to have high rates of depression. Depression in patients with PC can be linked to sleep disturbance. The American College of Sports Medicine notes that physical exercise is safe for most patients with cancer and physical inactivity should be avoided. However, clinical impacts of exercise interventions (EIs) on patients with PC have been poorly investigated. We aim to prospectively examine the effect of EIs on sleep disturbance in patients with PC using actigraphy, which is an objective measurement of motor activity and sleep. Methods and analysis This trial is a non-double blind randomised controlled trial. Standard therapy for each patient with PC will be allowed. When registering study subjects, a thorough assessment of the nutritional status and the daily physical activities performed will be undertaken individually for each participant. Study subjects will be randomly assigned into two groups: (1) the EI and standard therapy group or (2) the standard therapy group. In the EI and standard therapy group, physical activities equal to or higher than walking for 60 min/day will be strongly recommended. The primary outcome measure is the sleep-related variable using actigraphy (activity index) at 12 weeks. Ethics and dissemination The trial received approval from the Institutional Review Board at Hyogo College of Medicine (approval no. 2769). Final data will be publicly announced. A report releasing the study findings will be submitted for publication to an appropriate peer-reviewed journal. Trial registration number UMIN000029272; Pre-results.
Introduction
Pancreatic cancer (PC) is an aggressive disease, representing the fourth leading cause of cancer-related deaths worldwide. [1][2][3][4] The majority of patients with PC have unresectable, locally advanced or metastatic disease at the time of diagnosis, and the 5-year overall survival (OS) rate in patients with PC with advanced tumour status is extremely low. [1][2][3][4] The proportion of patients with PC who can proceed with curative intent (eg, surgery) is less than 20%. [1][2][3][4] Currently, there is no standard programme for screening patients at high risk of PC. [1][2][3][4] For more than a decade, gemcitabine has been the cornerstone for the treatment of patients with advanced PC, despite a small advantage in terms of OS. [5,6] On the other hand, patients with PC have long been known to have high rates of depression. [7,8] The aetiology of depression in patients with PC may be traced to more than the poor prognosis of PC and the pain it causes. [7,8] In addition, depression in patients with PC can be linked to sleep disturbance. [9] Appropriate symptomatic management is therefore critical for patients with PC.
Regular physical activity favourably influences the risk for disease onset and the progression of several malignancies. [10][11][12][13][14][15][16][17][18][19] Cancer survivors who exercise can potentially benefit from reduced levels of fatigue and improved quality of life (QOL) and physical function. [20] The American College of Sports Medicine notes that exercise is safe for most cancer survivors and physical inactivity should be avoided. [21] However, clinical impacts of exercise interventions (EIs) on patients with PC have been poorly investigated.
Decreased QOL in patients with PC can cause sleep disturbance, and poor sleep quality can further negatively influence QOL. Sleep disruptions have been extensively examined through the use of actigraphy, which is an objective measurement of motor activity and sleep. [22] Sleep disturbance often varies as a function of objective versus subjective evaluation. [26] Actigraphy is the method most frequently used by investigators, and it is a non-invasive and cost-effective medical device for assessing sleep quality compared with polysomnography, since it is the size of a wristwatch and can be worn without interfering with daily activities. [22-25,27] Despite the clinical benefits of EI in patients with cancer, there are limited data available on the effect of EI on sleep disturbance in patients with PC. There is therefore an urgent need to examine this issue. In this study, we aim to prospectively examine the effect of EI on sleep disturbance in patients with PC using actigraphy.
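As an illustration of the kind of outcome derived from actigraphy, the sketch below computes a simple activity index, assumed here to be the percentage of in-bed epochs containing any movement. The protocol does not define the index at this point, and the device's own scoring algorithm may differ; the counts are synthetic one-minute epochs, not patient data.

```python
# Illustrative computation of an actigraphy 'activity index', assumed here to be the
# percentage of epochs within the in-bed interval that contain any recorded movement.

def activity_index(epoch_counts):
    """Percentage of epochs with non-zero activity counts."""
    if not epoch_counts:
        raise ValueError("No epochs provided")
    active = sum(1 for c in epoch_counts if c > 0)
    return 100.0 * active / len(epoch_counts)

if __name__ == "__main__":
    in_bed_counts = [0, 0, 12, 0, 0, 0, 3, 0, 0, 0, 0, 45, 0, 0, 0]  # toy 1-min epochs
    print(f"Activity index = {activity_index(in_bed_counts):.1f} %")
```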
Eligibility of study subjects
In patients with PC with poor nutritional status, EI may be accompanied by increased health risks, as EI may cause further protein catabolism and muscle mass decline. [28][29][30] When registering study subjects, a thorough assessment of the nutritional status and the daily physical activities performed will be undertaken individually for each participant. For all potential study subjects, the researchers will explain in detail, in writing, the study purposes, procedures and the potential benefits and risks of this trial. The researchers must let every potential study subject know that they have the right to withdraw consent at any time throughout the study period. All potential study subjects must be given sufficient time for careful consideration prior to making a decision. All study subjects must sign the consent form before they can participate in the study. Written informed consents will be kept as a part of the clinical trial documents.
Inclusion criteria
1. Both sexes.
2. Patients with PC aged 20 years and older. A diagnosis of PC will be based on the current Japanese guidelines. 31 The severity of PC (clinical stage) will be determined based on the Union for International Cancer Control classification system. 32
3. Patients with Eastern Cooperative Oncology Group (ECOG) performance status (PS) 0 or 1.
Exclusion criteria
1. PC subjects with severe depression or psychiatric disorder, such as those with high scores on the patient health questionnaire.
2. PC subjects with far advanced tumour status with massive ascites, for whom participation in this trial is anticipated to be difficult.
3. PC subjects with severe underlying diseases, such as severe infectious diseases, severe chronic heart failure and respiratory disorders.
4. Pregnant or lactating female patients with PC.
5. PC subjects who may be at risk of falls.
6. PC subjects considered unsuitable for this trial due to the inability to participate in EI.
7. PC subjects considered unsuitable for this trial due to other reasons.

Study protocol
Study design: single-centre non-double blind randomised controlled trial
Our study subjects are patients with PC. All clinical stages (stages I, II, III and IV) of PC can be considered for participation in this study. Standard therapy for each patient with PC will be allowed. Study subjects will be randomly assigned into two groups: (1) the EI and standard therapy group or (2) the standard therapy group (figure 1). Standard therapies such as surgery and systemic chemotherapy will be selected according to tumour status and baseline characteristics in each patient through discussion with surgeons and oncologists. 16 31 Adding new medicines for sleep disturbance during the study period will not be allowed.
Exercise interventions
The declines in physical abilities and physiological function that are commonly seen in patients with cancer can be minimised or prevented with a well-thought-out exercise programme. 20 In the EI and standard therapy group, guidance for EI will be provided for each participant once a month at the outpatient nutritional guidance clinic. Participants will also be instructed to do exercises with ≥3 metabolic equivalents (mets; energy consumption in physical activities/resting metabolic rate) for 60 min/day and to do exercises >23 mets/week. [10][11][12][13][14][15] In the EI and standard therapy group, physical activities equal to or higher than walking for 60 min/day will be strongly recommended for each study subject because insufficient patient education may contribute to the belief that exercise is not helpful. In both groups, standard therapies for PC will be permitted and we will ask all study subjects to self-declare their daily amount of exercise. Direct monitoring of EI will not be undertaken.
Evaluation using actigraphy
Actigraphy is a medical device for gathering objective sleep/awake data in the natural sleeping surroundings over an extended time period. [22][23][24][25] The study subjects will be advised to wear a wrist actigraph on their non-dominant wrist over a period of 3 days based on the manufacturer's information. [22][23][24][25] Evaluation by actigraphy will be carried out at 4-week intervals. The follow-up period for each subject will be 12 months. At the same time points, data for laboratory testing, questionnaires, and clinical symptoms will also be gathered. Principally, study subjects will be advised to visit our hospital on an outpatient basis. Data from the actigraph will be downloaded into a dedicated computer program. The following five sleep-related factors will be utilised for assessment as described elsewhere: (1) sleep onset latency, (2) wake after sleep onset (defined as the minutes awake during the sleep period after the beginning of sleep (the first two continuous minutes scored as sleep)), (3) activity index (average amount of activity in sleep), (4) wake episodes (total number of wake counts between trying to start to sleep and wake-up times) and (5) sleep episodes in daytime (total number of sleep counts in daytime). 23 An increase in each score suggests worse sleep quality. The activity index will be assessed as the primary outcome measure because it reflects sleep quality well. 23 A representative case in actigraphy is presented in figure 2.
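To make the scoring of these variables concrete, the sketch below computes two of them, wake after sleep onset and the activity index, from a minute-by-minute recording. It is only an illustration under stated assumptions: the 1-min epoch length, the array names and the simulated data are hypothetical, and the actual scoring will be performed by the dedicated software supplied with the actigraph.

```python
import numpy as np

def sleep_onset_index(is_sleep):
    """Index of the first of >= 2 consecutive epochs scored as sleep
    (mirroring the protocol's definition of the beginning of sleep)."""
    for i in range(len(is_sleep) - 1):
        if is_sleep[i] and is_sleep[i + 1]:
            return i
    return None

def waso_and_activity_index(is_sleep, activity_counts):
    """Wake after sleep onset (minutes awake after sleep onset) and
    activity index (average activity during the sleep period), 1-min epochs."""
    onset = sleep_onset_index(is_sleep)
    if onset is None:
        return None, None
    waso = int(np.sum(~is_sleep[onset:]))                  # minutes awake after onset
    activity_index = float(np.mean(activity_counts[onset:]))
    return waso, activity_index

# Illustrative data only: one 8-hour in-bed period of 1-min epochs
rng = np.random.default_rng(0)
is_sleep = rng.random(480) > 0.15           # True = epoch scored as sleep
activity_counts = rng.poisson(5, size=480)  # movement counts per epoch
print(waso_and_activity_index(is_sleep, activity_counts))
```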
Time to start EI
In the EI group, when the general condition of a study subject is stable after the initial standard therapy and EI is judged by the attending physicians to be safely feasible, EI will be initiated as soon as possible.
Primary endpoints (confirmatory)
Sleep-related variable using actigraphy (activity index) at 12 weeks.

Secondary endpoints (exploratory)
1. Questionnaire survey
Sleep rhythm and depressed state in daily life will be assessed using questionnaire surveys (the Beck Depression Inventory, Second Edition (BDI-II) 33 and the Pittsburgh Sleep Quality Index 34 ).
Changes over time in baseline characteristics
Changes over time in the following baseline parameters will be assessed: body weight, body mass index, white cell count, platelet count, serum albumin level, aspartate aminotransferase, alanine aminotransferase, total cholesterol, triglyceride, low-density lipoprotein, high-density lipoprotein, fasting blood glucose, haemoglobin A1c, homeostasis model assessment of insulin resistance and tumour markers.
Follow-up and standard of care
During the observation period and after completion of the trial, all study subjects will be seen in clinic every 4 weeks to address complications from PC and other comorbidities. In both groups, standard therapies for PC will be continued. Regular laboratory tests (haematology, biochemistry and coagulation) will be required at the trial entry and at the completion of this trial and on an as-needed basis.
Case registration period
From October 2017 to March 2021.
Data collection
A study assistant will collect data elements from the medical records of each patient, including the following baseline data:
a. Sex and age.
b. Height and body weight.
c. Vital signs and ECOG-PS.
d. History of alcohol consumption and history of smoking.
e. Disease severity of PC (clinical stage).
f. Previous treatment and medication.
g. Comorbid conditions.
h. Baseline laboratory tests.
i. Presence or absence of ascites or distant metastases on radiologic findings.
Statistical methods
Descriptive statistics
Data will be entered into JMP software (SAS Institute, Cary, North Carolina, USA), and all relevant data will be checked to confirm consistency. Data at each time point will be compared. Quantitative factors will be compared using a paired or an unpaired t-test. Categorical factors will be compared using the Pearson χ2 test or Fisher's exact test, as appropriate. We will perform statistical analyses on an intention-to-treat basis, by which all study subjects will be analysed in the group to which they are assigned. Multivariate analysis for the improvement of the activity index in actigraphy will also be performed.
Sample size estimation
Based on the results of our preceding study regarding actigraphy, supposing that the α error (type 1 error) is 0.05, the detection power (1 − β) is 0.8, the difference in the two groups to be detected and measured using bioimpedance analysis (BIA) is 10 and the SD of the outcome is 10, the number of required participants in each group will be 17 (a total of 34 participants) in order to randomly allocate one to one. 17 Randomisation will be performed using the clinical stage of PC as an allocation factor for matching baseline characteristics between the two groups. We anticipate that a number of participants may drop out of the study; therefore, a total of 40 participants will be necessary to confirm our hypothesis.
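As a cross-check of this calculation, the short sketch below reproduces the per-group sample size for a two-sided two-sample t-test with α = 0.05, power = 0.8 and a standardised difference of 10/10 = 1.0; the use of Python and statsmodels here is our own illustration and is not part of the protocol.

```python
from statsmodels.stats.power import TTestIndPower

# Difference to detect = 10, SD = 10  ->  standardised effect size d = 1.0
n_per_group = TTestIndPower().solve_power(effect_size=1.0, alpha=0.05,
                                          power=0.8, ratio=1.0,
                                          alternative='two-sided')
print(round(n_per_group))  # ~17 per group, i.e. 34 participants in total
```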
Discussion
Cancer therapies cause profound debilitation that leads to reduced physical function and impairs QOL. 20 EIs benefit patients with cancer. 35 A recent study reported that EI may have a potential favourable impact on tumour outcome by reducing insulin resistance. 36 The clinical significance of EI has recently gained considerable attention due to the multiple health benefits of EI. [10][11][12][13][14][15][16][17][18][19] In that sense, our current study protocol and relevant data may be worth reporting. To the best of our knowledge, this is the first prospective interventional clinical trial that will objectively assess the influence of EIs on sleep disturbance in patients with PC. From a clinical practice perspective, we highlight the potential safety risks of EIs in patients with PC with poor nutritional status or poor PS, because EI may risk promoting further protein catabolism and muscle mass loss. An appropriate nutritional assessment will be needed prior to starting EIs and patients with PC with PS 2 or more will be excluded.
One of the major strengths of our study is that it will be a randomised controlled trial (RCT). We acknowledge one relevant study limitation: this study will be based solely on a Japanese population. Additional research in different ethnic populations will be required to further verify the efficacy of EI in sleep disturbance and to extrapolate our results to other ethnicities. However, if the clinical efficacy of EI for sleep disturbance in patients with PC is confirmed in this RCT, the information we provide may be beneficial to clinicians.
Ethics and dissemination
Research ethics approval
Ethical approval for this trial was granted by the Institutional Review Board at Hyogo College of Medicine (approval no. 2769). The study protocol, informed consent form and other submitted documents were reviewed and approved. Throughout the trial period, the Declaration of Helsinki will be strictly followed in order to guarantee the rights of the study subjects. The trial registration number is UMIN000029272 (https://upload.umin.ac.jp/); pre-results. No patient had been registered at the time of submission of our manuscript.
Confidentiality
All study subjects' data will be stored securely. All relevant documents will be locked up and preserved at the Department of Hepatobiliary and Pancreatic Disease, Department of Internal Medicine, Hyogo College of Medicine, Hyogo, Japan, in accordance with data protection procedures. For each study subject, all data collected during the study period will be identified by a serial number and a name acronym in the case report forms.
Dissemination policy
Final data will be publicly disseminated irrespective of the study results. Results will be presented at relevant conferences and submitted to an appropriate peer-reviewed journal following trial closure and analysis.
Contributors KY designed the study and wrote the initial draft of the manuscript. HN and HE contributed to the analysis and interpretation of data and assisted in the preparation of the manuscript. NI, YI, AI, YY, YM, KH, CN, RT, TN, NA, YS, NIk, TT, HI and SN contributed to data collection and interpretation, and critically reviewed the manuscript.
Funding This research received no specific grant from any funding agency in the public, commercial or not-for-profit sectors.
Competing interests None declared.

Open access This is an Open Access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/
Mapping of CD4+ T-cell epitopes in bovine leukemia virus from five cattle with differential susceptibilities to bovine leukemia virus disease progression
Background
Bovine leukemia virus (BLV), which is closely related to human T-cell leukemia virus, is the etiological agent of enzootic bovine leukosis, a disease characterized by a highly prolonged course involving persistent lymphocytosis and B-cell lymphoma. The bovine major histocompatibility complex class II region plays a key role in the subclinical progression of BLV infection. In this study, we aimed to evaluate the roles of CD4+ T-cell epitopes in disease progression in cattle.

Methods
We examined five Japanese Black cattle, including three disease-susceptible animals, one disease-resistant animal, and one normal animal, classified according to genotyping of bovine leukocyte antigen (BoLA)-DRB3 and BoLA-DQA1 alleles using polymerase chain reaction sequence-based typing methods. All cattle were inoculated with BLV-infected blood collected from BLV experimentally infected cattle and then subjected to CD4+ T-cell epitope mapping by cell proliferation assays.

Results
Five Japanese Black cattle were successfully infected with BLV, and CD4+ T-cell epitope mapping was then conducted. Disease-resistant and normal cattle showed low and moderate proviral loads and harbored six or five types of CD4+ T-cell epitopes, respectively. In contrast, the disease-susceptible animal with the highest proviral load did not harbor any CD4+ T-cell epitopes, and the two other disease-susceptible cattle, which also had high proviral loads, each had only one epitope. Thus, the CD4+ T-cell epitope repertoire was less frequent in disease-susceptible cattle than in other cattle.

Conclusion
Although only a few cattle were included in this study, our results showed that CD4+ T-cell epitopes may be associated with BoLA-DRB3-DQA1 haplotypes, which conferred differential susceptibilities to BLV proviral loads. These CD4+ T-cell epitopes could be useful for the design of anti-BLV vaccines targeting disease-susceptible Japanese Black cattle. Further studies of CD4+ T-cell epitopes in other breeds and using larger numbers of cattle with differential susceptibilities are required to confirm these findings.
Background
Bovine leukemia virus (BLV) is closely related to human T-cell leukemia virus types 1 and 2, and is associated with enzootic bovine leukosis, a common neoplastic disease in cattle [1,2]. BLV infection can remain clinically silent, with cattle in an aleukemic state, or can emerge as persistent lymphocytosis characterized by an increased number of B lymphocytes, or rarely as B-cell lymphoma in various lymph nodes after a long period of latency [1,2].
BLV contains the structural genes gag, pol, and env and the two regulatory genes tax and rex. The gag gene encodes three mature proteins, i.e., p15 (matrix protein), p24 (an abundant capsid protein), and p12 (nucleocapsid protein). The tax gene encodes the Tax protein, which activates the transcription of BLV through the 5′ long terminal repeats of BLV [1,3]. The BLV env gene encodes a mature surface glycoprotein (gp51) and a transmembrane protein (gp30). The gp51 protein is thought to be the major target of humoral immunity. Callebaut et al. [4] performed CD4 + T-cell epitope mapping of the gp51 protein and identified three epitopes: peptide 98-117, peptide 169-188, and peptide 177-192. Gatei et al. [5] also conducted epitope mapping in sheep, cows, and calves. They found two other gp51 CD4 + T-cell epitopes: peptide 51-70 and peptide 61-80. Mager et al. [6] performed a CD4 + T-cell proliferation assay using eight BLV-seropositive cows and found two epitopes in the p24 amino acid sequence: peptide 31-55 and peptide 141-165. Sakakibara et al. identified the T-cell epitopes Tax peptide 131-150 and Tax peptide 111-130, both of which contained epitopes recognized by T-cells from BALB/c and C57BL/6 mice, within the Tax protein [7]. However, to date, no Tax protein epitope mapping has been conducted in cattle. In fact, only two proteins, gp51 and p24, have been studied as CD4 + T-cell epitopes using the natural host of BLV.
BLV disease progression and proviral load are strongly related to major histocompatibility complex (MHC) class II alleles. The bovine MHC region is referred to as the bovine leukocyte antigen (BoLA) region [8,9]. The BoLA class II region is divided into two distinct subregions: class IIa and class IIb. Class IIa contains classical class II genes, including at least two DQA genes, two DQB genes, one functional DRB3 gene, and one DRA gene, and class IIb contains nonclassical class II genes. These class II genes encode proteins that are able to bind to the processed peptides and present the peptides to CD4 + T-cells. Class II molecules are formed by α- and β-chains encoded by distinct genes within the MHC region. For example, the α1 and β1 domains form the peptide binding groove [10]. MHC genes are highly polymorphic; to date, 65 BoLA-DQA, 87 BoLA-DQB, and 303 BoLA-DRB3 alleles have been identified according to the BoLA Nomenclature Committee of the Immuno Polymorphism Database MHC database (http://www.ebi.ac.uk/ipd/mhc/bola). Therefore, class II molecules encoded by distinct alleles may exert different effects on responses of T-cells via binding to different peptides directly within the peptide binding groove of the various class II molecules. Indeed, BoLA-DRB3 polymorphisms are known to be associated with BLV-induced persistent lymphocytosis [11,12] and BLV proviral load [13][14][15]. Recently, Miyasaka et al. reported that the BoLA class II allele DRB3*1601 was associated with a high BLV proviral load in Japanese Black cattle and that DRB3*0902 and DRB3*1101 were associated with a low proviral load [16]. Additionally, BoLA-DQA1*0204 and BoLA-DQA1*10012 were reported to be associated with low and high proviral loads, respectively [16]. We therefore hypothesized that disease-susceptible cattle may have fewer epitopes than resistant cattle, resulting in weak immune responses. Although several groups have used mice, sheep, and cattle to try to identify BLV epitopes recognized by CD4 + and CD8 + T-cells and B cells [4,5,7,[17][18][19][20][21], none of these studies have evaluated the roles of MHC polymorphisms.
Accordingly, in this study, we aimed to examine the roles of these polymorphisms and to map CD4 + T-cell epitopes in a preliminary study in BLV-susceptible and -resistant cattle infected with BLV.
Experimental infection with BLV and collection of blood samples
Five 5-month-old Japanese Black cattle (S2, S4, S6, R1, and N1), each of which was genotyped for BoLA-DRB3 and -DQA1 alleles using a polymerase chain reaction (PCR) sequence-based typing (SBT) method [22,23], were experimentally challenged by intravenous injection of white blood cells obtained from BLV-seropositive Holstein-Friesian cattle ( Table 1). The inoculated blood had 4 × 10 7 copies of provirus, as estimated by BLV-CoCoMo-qPCR-2, a quantitative real-time PCR method that uses coordination of common motifs (CoCoMo) primers to measure the proviral loads of known and novel BLV variants in BLV-infected animals [24][25][26][27]. Blood samples were collected for approximately 5 months after the first inoculation, and DNA and serum samples were obtained.
The study was approved by the Animal Ethical Committee and the Animal Care and Use Committee of the Animal Research Center, Hokkaido Research Organization (approval number 1302).
BoLA-DQA1 alleles were genotyped using the PCR-SBT method as previously described [23]. Briefly, nested PCR was performed using the primer pair DQA1intL2 and DQA1-677R for the first round of amplification and the primer pair DQA1intL3 and DQA1ex2REV2.1 for the second round. After amplicon purification using an ExoSAP-IT PCR product purification kit (Affymetrix, Cleveland, OH, USA), sequence processing and data analysis were performed as described for BoLA-DRB3 typing.
Preparation of peripheral blood mononuclear cells (PBMCs) and CD4 + T lymphocytes
PBMCs were separated according to the method of Miyasaka and Trnka [28], and CD4 + T-cells were purified using the MACS System (Miltenyi Biotech, Inc., Auburn, CA, USA). Briefly, PBMCs were incubated with the ILA11A monoclonal antibody (mouse anti-bovine CD4; VMRD, Inc., Pullman, WA, USA) and captured with anti-mouse IgG monoclonal antibodies conjugated to magnetic beads. Magnetic bead-bound cells were then separated on a MACS LS column (Miltenyi Biotech, Inc.). The purity of CD4 + T-cells was 85-89%.
Synthetic peptides
A series of 20-mer peptides, each overlapping by 10 amino acids, was synthesized based on the reported sequences of BLV Gag (GenBank accession no. LC057268), Env (Gen-Bank accession no. EF600696), and Tax (GenBank accession no. EF600696) proteins and purified using highperformance liquid chromatography to greater than 70% purity (Sigma, St. Louis, MO, USA). The peptides were then resuspended in 80% dimethyl sulfoxide (DMSO) to form stock solutions (2 mM), separated into aliquots, and stored at − 20°C.
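For clarity, the sliding-window construction of this peptide library can be written as a few lines of code; the function name and the demonstration sequence below are hypothetical and serve only to illustrate the 20-mer / 10-residue-overlap scheme.

```python
def overlapping_peptides(sequence, length=20, overlap=10):
    """Return consecutive 20-mer peptides overlapping by 10 residues."""
    step = length - overlap
    return [sequence[i:i + length]
            for i in range(0, len(sequence) - length + 1, step)]

# Artificial 40-residue demonstration sequence (not a BLV protein)
demo = "ACDEFGHIKLMNPQRSTVWYACDEFGHIKLMNPQRSTVWY"
for peptide in overlapping_peptides(demo):
    print(peptide)   # 3 peptides: residues 1-20, 11-30, 21-40
```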
Proliferation assay
Antigen-presenting cells (APCs) were prepared by treating PBMCs with 50 μg/mL mitomycin C (Sigma-Aldrich, St. Louis, MO, USA) in RPMI 1640 for 60 min at 37°C. After washing five times in phosphate-buffered saline, cells were resuspended in RPMI 1640 and used as APCs. APCs (8 × 10 6 cells/mL) and CD4 + T-cells (2 × 10 6 cells/ mL) were co-incubated in flat-bottomed 96-well microplates (Sigma-Aldrich, Trasadingen, Switzerland) in the presence of either 20 μM peptide or 0.8% DMSO (negative control) in a total volume of 110 μL in cell medium. The microplates were then incubated in a 5% CO 2 humidified atmosphere at 37°C. After 109 h of incubation, 10 μL Cell Counting Kit-8 solution (Dojindo Molecular Technologies, Kumamoto, Japan) was added to each well, and the microplates were incubated for an additional 4 h under the same conditions. The microplates were then read at an optical density of 450 nm. All test conditions were set up in triplicate. The measured absorbance was compared with that of control wells incubated without peptides, and the stimulation index (SI) was calculated using the following equation:
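For reference, the stimulation index defined in the figure legends, where each bracketed term denotes the measured absorbance of the corresponding wells, can be written as:

```latex
\mathrm{SI} \;=\; \frac{[\mathrm{PBMC,\,CD4,\,peptide}] - [\mathrm{medium\ only}]}
                       {[\mathrm{PBMC,\,CD4,\,DMSO}] - [\mathrm{medium\ only}]}
```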
Detection of anti-BLV antibodies in serum samples
An anti-BLV antibody enzyme-linked immunosorbent assay kit was used to detect antibodies according to the manufacturer's instructions.
Statistical analysis
The SI data were analyzed using F-tests and t-tests with the built-in functions in Microsoft Excel. Results with p values of less than 0.01 were considered statistically significant.
Results
Genotyping of BoLA class II haplotypes and experimental challenge of five Japanese Black cattle with BLV
BoLA class II genotypes are major regulators of BLV-induced persistent lymphocytosis progression and the dynamics of the provirus in the blood [11-14, 16, 32]. Although the MHC class II genotype is the most important factor that determines CD4 + T-cell epitopes, no studies have combined genotyping of BoLA alleles with epitope mapping. Here, we evaluated five BoLA class II-genotyped Japanese Black cattle (Table 1). Three (S2, S4, and S6) of the five cattle were disease-susceptible cattle with a BoLA class II genotype that is associated with a high proviral load [16]. Two of these three cattle were homozygous for DRB3*1601 and BoLA-DQA1*10012, which are associated with a high proviral load [16], and one was homozygous for DRB3*1601 and heterozygous for BoLA-DQA1*10012. In contrast, the resistant animal (R1) carried the BoLA-DQA1*0204 allele, which is related to a low proviral load [16], and the normal animal (N1) did not harbor the known BoLA-DRB3 or BoLA-DQA1 alleles, which are associated with BLV proviral load. BLV provirus levels were markedly higher in all three susceptible cattle (S2, S4, and S6); however, levels were low and moderate in the resistant animal (R1) and the normal animal (N1), respectively (Table 1). These five cattle were experimentally infected with BLV and then used for CD4 + T-cell epitope mapping experiments.
Proliferation of CD4 + T-cells isolated from BLV-infected cattle
The synthesized peptides were grouped into 23 pools, each containing five peptides at a final concentration of 20 μM per peptide. In the first screening, CD4 + T-cells isolated from the five cattle were stimulated with each peptide pool, and proliferation was measured. No peptide pool significantly induced the proliferation of CD4 + T-cells from the susceptible animal S6 (p < 0.01). Peptide pools 9, 11, and 14 induced significantly high levels of proliferation in CD4 + T-cells from S2; pool 21 induced a significantly high level of proliferation in cells from S4; pools 9 and 21 induced high levels of proliferation in cells from N1; and pools 21 and 22 induced high levels of proliferation in cells from R1 (Fig. 1).
To further map the epitopes recognized by CD4 + T-cells from the five BLV-infected cattle, proliferative responses in the presence of peptide within the positive peptide pools were examined in proliferation assays. The peptides gp51N11 and tax17 induced particularly high levels of proliferation in CD4 + T-cells from S2 and S4, respectively. Five peptides (i.e., gp30N5, gp30N6, gp30N7, tax16, and tax19) induced high proliferation of CD4 + T-cells from N1, and six peptides (i.e., tax17, tax19, tax20, tax22, tax23, and tax24) induced high proliferation of CD4 + cells from R1 (Fig. 2).
Overview of the positions of CD4 + T-cell epitopes identified in this study
In this study, we identified 11 types of 20-mer peptides that induced the proliferation of CD4 + T-cells collected from four of five BLV-infected cattle (Fig. 3). The number of CD4 + T-cell epitopes was inversely related to proviral load, which depended on the MHC class II genotype.
We identified a common epitope, gp30N6, recognized by CD4 + T cells from the normal animal (N1); this epitope corresponded to a putative immunosuppressive domain that affects the fusion activity of BLV in vitro [33] (Fig. 3). Moreover, gp30N5 and gp30N7, which were located on either side of gp30N6, were also recognized as CD4 + T-cell epitopes in N1. Although many tax peptides showed high SI values, these peptides were not identified as CD4 + T-cell epitopes because of the high standard errors observed during peptide screening (Fig. 2). The SI average of peptides from pool 21 tended to be high. Four peptides, i.e., tax20, tax22, tax23, and tax24, induced proliferation only in R1, which showed a low proviral load. In addition, N1 also had two peptides, i.e., tax16 and tax19, which were identified as CD4 + T-cell epitopes. Therefore, the tax extracellular domain was considered a common CD4 + T-cell epitope in this study.
Although few cattle were examined in this study, we found strong evidence that the genetic background may affect the selection of proteins as immune targets for CD4 + T cell-associated immune responses. Further studies using experimental infection should be performed to confirm our results.
Discussion
In this study, we screened 115 synthetic peptides encompassing the Gag proteins (p15, p24, and p12), Env proteins (gp51 and gp30), and Tax proteins of BLV. From this preliminary study, we identified 11 epitopes recognized by CD4 + T-cells isolated from five cattle (S2, S4, S6, R1, and N1) showing differing susceptibilities to BLV according to BoLA class II haplotypes. This is the first study to use MHC class II-genotyped cattle to map CD4 + T-cell epitopes in BLV, and our results showed that CD4 + T-cell epitopes derived from disease-susceptible cattle harboring the BoLA-DRB3*1601 homozygous genotype (n = 3) were fewer in number than those in resistant (n = 1) and normal cattle (n = 1). The BoLA-DRB3 gene regulates both antigen epitope recognition and the magnitude of antigen-specific T-cell responses mounted upon exposure to infection [8,9]. Similarly, Nagaoka et al. [34] also showed weak reactivity to BLV peptide vaccination in BLV-susceptible sheep and found that susceptible sheep developed BLV-induced lymphoma after challenge by BLV. These results suggested that MHC class II polymorphisms contribute to individual differences in CD4 + T-cell epitopes and hence in immune responses.
Three BLV peptides, i.e., Env 98-117 [4], Env 51-70, and Env 61-80 [5], are known CD4 + T-cell epitopes. Here, we identified one CD4 + T-cell epitope within the gp51 protein, namely, gp51N11, and showed that 17 of the 20 amino acids of gp51N11 were identical to Env 98-117. Peptide pool 14, which contained gp51N11, showed a relatively high SI, indicating that this region contained epitopes recognized by CD4 + T-cells. Sakakibara et al. identified T-cell epitopes within the Tax protein [7], i.e., peptide 131-150 (IGHGLLPWNNLVTHPVLGKV) and peptide 111-130 (SPFQPYQCQLPSASSDGC), which contained epitopes recognized by T-cells from BALB/c and C57BL/6 mice, respectively. These regions corresponded to tax11 and tax14, neither of which were identified as epitopes in the current study. These findings suggested that CD4 + T-cell epitopes are different in mice and cattle. Interestingly, tax17, tax19, tax20, and tax22-24 (detected in R1 in our study) corresponded to a leucine-rich region (tax157-197) that may be involved in heterologous protein interactions [35]. According to a previous study [16], the resistance alleles BoLA-DRB3 and BoLA-DQA are commonly observed in Japanese Black and Holstein cattle, whereas susceptible alleles differed. Although there was only one resistant animal, more epitopes from Tax protein were identified in resistant cattle than in other cattle, suggesting that CD4 + T-cell epitopes (Tax22-24) from Tax protein may induce strong immune responses. Additional studies with more cattle are required to further confirm these findings.

Fig. 1 CD4 + T-cell proliferative responses to 23 peptide pools. PBMCs were obtained from five BLV-infected cattle (S2, S4, S6, R1, and N1). CD4 + T-cells were then isolated and used as effector cells. PBMCs were pre-treated with mitomycin C (4 × 10 5 /50 μl; 50 μg/ml) for 1 h at 37°C and then co-incubated with CD4 + T-cells (1 × 10 5 /50 μl) and different peptide pools (each pool contained five different peptides, each at 20 μM) for 113 h at 37°C. Cell Counting Kit-8 was used to measure CD4 + T-cell proliferation. The absorbance of the test wells was compared with that of control wells that did not contain peptides. The Stimulation Index (SI) was calculated as SI = ([PBMC, CD4, peptide] − [medium only]) / ([PBMC, CD4, DMSO] − [medium only]). The bars represent the mean ± standard deviation (SD) of triplicate wells. Asterisks and shaded bars indicate pools with values significantly higher than the DMSO (negative control) wells (p < 0.01).

Fig. 2 CD4 + T-cell proliferative responses to individual peptides within positive peptide pools. CD4 + T-cells (effector cells; 1 × 10 5 cells/50 μl) from four BLV-infected cattle (S2, S4, R1, and N1) were co-incubated with mitomycin C-treated PBMCs (APCs; 4 × 10 5 cells/50 μl) and incubated with either 80% DMSO (negative control) or peptide from pools 9, 11 and 14 (for S2), pool 21 (for S4), pools 20 and 21 (for R1), and pools 9 and 21 (for N1), all at a final concentration of 20 μM. The cells were incubated with peptide for 113 h at 37°C and CD4 + T-cell proliferation was examined using Cell Counting Kit-8. The absorbance of the test wells was compared with that of control wells incubated without peptide, and the Stimulation Index (SI) was calculated as SI = ([PBMC, CD4, peptide] − [medium only]) / ([PBMC, CD4, DMSO] − [medium only]). The bars represent the mean ± standard deviation (SD) of triplicate wells. Asterisks and shaded bars indicate peptides with values significantly higher than the DMSO (negative control) wells (p < 0.01).
Conclusion
We successfully identified 11 BLV epitopes recognized by CD4 + T-cells from four of five cattle, including four types of BoLA class II haplotypes. Among CD4 + T-cell epitopes related to the MHC class II genotype, fewer CD4 + T-cell epitopes were observed in susceptible cattle than in resistant and normal cattle. Although few samples were evaluated, the result showed that antigens were restricted according to BoLA class II haplotype, indicating that genotyping is important for determining antigenic epitopes recognized by the host immune response.
The CIPM list of recommended frequency standard values: guidelines and procedures
A list of standard reference frequency values (LoF) of quantum transitions from the microwave to the optical regime has been recommended by the International Committee for Weights and Measures (Comité international des poids et mesures, CIPM) for use in basic research, technology, and for the metrology of time, frequency and length. The CIPM LoF contains entries that are recommended as secondary representations of the second in the International System of Units, and entries that can be used to serve as realizations of the definition of the metre. The historical perspective that led to the CIPM LoF is outlined. Procedures have been developed for updating existing, and validating new, entries into the CIPM LoF. The CIPM LoF might serve as an entry for a future redefinition of the second by an optical transition.
Introduction and historical perspective
Since the redefinition of the unit of length in the International System of Units (SI) [1] by the 17th General Conference of Weights and Measures (Conférence générale des poids et mesures, CGPM) in 1983 [2] the metre has been defined via the adopted value of the speed of light in a vacuum c 0 = 299 792 458 m s −1 . The fixed numerical value for the speed of light c 0 = λ · ν links the vacuum wavelength λ and the frequency ν of any plane electromagnetic wave. Consequently, each radiation whose frequency can be traced back to the primary standard of time and frequency, i.e. the caesium atomic clock, represents at the same time a unified standard of frequency, time and length.
In parallel with the redefinition of the metre, the 17th CGPM invited the International Committee for Weights and Measures (Comité international des poids et mesures, CIPM) to draw up instructions for the practical realization of the new definition of the metre, and to choose radiations which can be recommended as wavelength standards for the interferometric measurement of length and to itemise operating procedures for their use, and finally to pursue studies to improve these standards. These recommendations for the practical realization of the definition were generally referred to as the mise en pratique of the definition. In turn, the CIPM recommended that the metre be realized by one of the following methods: (a) by means of the length l of the path travelled in vacuum by a plane electromagnetic wave in a time t, using the measured time and the relation l = c 0 · t; (b) by means of the wavelength in vacuum λ of a plane electromagnetic wave of frequency f, using the measured frequency and the relation λ = c 0 /f; (c) by means of one of the radiations from the list of recommended radiations [2], whose stated wavelength in a vacuum, or whose stated frequency, can be used with the uncertainty shown, provided that the given specifications and accepted good practice are followed.
The CIPM also recommended that, in all cases, any necessary corrections should be applied in order to take account of actual conditions such as diffraction, gravitation, or imperfection in the vacuum.
These three methods are essentially only two: a time-of-flight method and an interferometric method. The latter method uses a radiation of known vacuum wavelength that can be related to the SI frequency of the plane wave used in interferometry, either by a direct measurement or by reference to one of the recommended vacuum wavelengths of validated light sources. The mise en pratique for the definition of the metre was updated on several occasions by the Consultative Committee for Length (CCL) and its Mise en Pratique Working Group (MePWG) [3][4][5][6], thereby progressively improving the realization of the definition of the metre (figure 1). For practical length measurements, the uncertainty due to the realization of the length unit by optical wavelength/frequency standards soon became negligible: the practical measurement of the length of a gauge block in an interferometer is limited to about 10 −8 [7][8][9], mostly determined by the properties of the artefact itself and the refractive index of air. Even for interferometric displacement measurements the diffraction correction places a technical limit. As an example, consider the diffraction correction for a Gaussian beam of waist w 0 = 0.1 m and a wavelength of 500 nm, which would amount to 6 × 10 −13 [10].
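A back-of-the-envelope version of this estimate, assuming the commonly quoted first-order (Gouy-phase) diffraction correction for a fundamental Gaussian beam rather than the full treatment of [10], is:

```latex
\frac{\Delta\lambda}{\lambda} \;\approx\; \frac{1}{k^{2}w_{0}^{2}}
  \;=\; \left(\frac{\lambda}{2\pi w_{0}}\right)^{2}
  \;=\; \left(\frac{5\times 10^{-7}\,\mathrm{m}}{2\pi\times 0.1\,\mathrm{m}}\right)^{2}
  \;\approx\; 6\times 10^{-13},
```

with k = 2π/λ the vacuum wavenumber and w 0 the beam waist.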
The use of laser cooling of absorbers [11,12], improved frequency stabilisation techniques, the development of phase coherent frequency measurement chains [13] and later the invention of femtosecond frequency combs [14] had, furthermore, two important consequences: firstly, the broad availability of femtosecond frequency combs allows each laboratory that has access to such a device and a primary caesium clock to directly measure the frequency of any desired laser. Hence, the laser standards in the mise en pratique (method (c)) lose, to some extent, their importance by virtue of the direct realization by method (b). Secondly, the most advanced frequency standards in the mise en pratique had acquired low uncertainties that were orders of magnitude better than the uncertainties that could be made use of in length metrology. As a result, they became more interesting for other fields apart from length metrology, e.g. in basic research [15], ultra-high precision spectroscopy [16] or for optical atomic clocks. Consequently, in 2001 the mise en pratique was renamed 'Practical realization of the definition of the metre, including recommended radiations of other optical frequency standards (2001)' [5].
In general, it was expected that such optical frequency standards and other microwave frequency standards would demonstrate reproducibility and stability approaching that of primary caesium. It was considered that these systems could be used to realize the second, provided their accuracy was close to that of caesium, but accepting that their uncertainty could obviously be no better than the caesium uncertainty while the latter remained the primary frequency standard. Today, the most advanced optical frequency standards have evolved to optical clocks that outperform the best microwave clocks with respect to their uncertainty (figure 2) and instability.
Additionally, and most importantly, the femtosecond optical frequency comb offered solutions to the longstanding problem of a convenient and accurate clockwork that linked the optical and microwave regions and allowed for frequency comparisons between optical frequency standards with very different frequencies.
Consequently, in 2001 the Consultative Committee for Time and Frequency (CCTF) took note of the continuation of the caesium 133 definition of the second, but recognised that there were new atoms and ions being studied as potential optical frequency standards, facilitated by new optical-frequency measurement concepts that could allow the use of optical transitions as practical frequency standards offering direct microwave outputs from such standards. One of these standards could provide the basis for a future definition of the second, and the CCTF focused on the desirability of reviewing accurate frequency measurements of such atom and ion transition frequencies made relative to the caesium frequency standard. As a result, the 'Recommendation CCTF 1 (2001)' [17] promoted the establishment of a list of 'secondary representations of the second' (SRS) where the documentation of uncertainty that applied to these SRS would be the same as those for primary caesium standards used to contribute to international atomic time (TAI).

Figure 2. Evolution of the fractional uncertainty in realizing the unperturbed line centre in order to determine the unperturbed quantum transition frequency of primary atomic caesium clocks (squares) and of optical frequency standards (dots). Red dots show the fractional uncertainties of optical frequency standards directly related to the caesium atomic clock, green dots refer to published estimated standard uncertainties to realize the unperturbed line centre.
Furthermore, the establishment of the set of SRS had significant implications for the list 'Practical realization of the definition of the metre, including recommended radiations of other optical frequency standards (2001)', formerly the mise en pratique. In order to avoid ambiguity in respect of radiations appearing on both lists with potentially differing levels of stated uncertainty, it was considered essential that the values for mise en pratique and SRS radiations be combined in a single list, where the CCTF would ratify new and existing radiations to be accepted as SRS, and the CCL would recommend new and existing radiations for realization of the definition of the metre. Subsequently, following the wishes of the CIPM, a Joint Working Group (JWG) of the CCL/CCTF was set up in September 2003 with experts from the CCL and CCTF, taking note of convergence of interests in work, to consider the criteria for adoption of a radiation as an SRS. The JWG-later renamed as the CCL-CCTF Frequency Standards Working Group (WGFS)recommended in 2003 that the requirements should include a peer-reviewed uncertainty budget for the frequency of the radiation, and that the total uncertainty of the value should be no more than one order of magnitude larger than the best realizations of the primary frequency standards of that date [18]. In 2004 the CCTF (in 'Recommendation CCTF 1 (2004)' [19]) recommended using the rubidium-87 unperturbed ground-state hyperfine quantum transition frequency (6.8 GHz) as an SRS.
As a result of these deliberations, the CIPM concluded that the mise en pratique radiations and the secondary representations of the second should be combined into the single CIPM list of recommended frequency standard values (CIPM LoF), which was established in 2005. In 2007, the CCL recommended [21] to the CIPM an updated list of frequency values for the 12 C 2 H 2 (ν1 + ν3) band at 1.54 µm, the addition of frequency values for the 12 C 2 HD (2ν1) band at 1.54 µm, and the addition of frequency values for the hyperfine components of the P(142) 37-0, R(121) 35-0 and R(85) 33-0 iodine transitions at 532 nm, which were adopted by the CIPM as 'Recommendation 1 (CI-2007)' [22]. At the same meetings it was decided that an entry for unstabilized He-Ne lasers, operating on the 633 nm (3s 2 → 2p 4 ) neon transition, be included in the list of standard frequencies ('Recommendation 2 (CI-2007)') [23] and that an accompanying paper with CCL authority be published [24].
In 2009, the CCTF and the CCL proposed updates to certain frequency values in the CIPM LoF. Three further radiations were included in the list for the first time. These were the 88 Sr transition at 429 THz, the 40 Ca + quadrupole transition at 411 THz and the 518 THz clock transition in 171 Yb. These updates were recommended by the CIPM the same year [25].
At the 2015 update a paradigm shift became necessary as a result of two developments. Firstly, a number of optical frequency standards demonstrated smaller fractional projected uncertainties than the best caesium atomic clocks, and secondly, with the optical frequency comb technique, ratios of two optical frequencies could be measured with uncertainties that supported the uncertainties of the best optical clocks. It has been shown that the relative frequency uncertainty of the optical and microwave outputs of a femtosecond laser frequency comb can be as low as 8 × 10 −20 and 1.7 × 10 −17 , respectively [33,34]. As a result of these developments, an increasing number of direct optical frequency ratios had been measured with uncertainties that were much lower than those of direct frequency measurements against the caesium atomic clock as the primary realization of the definition of the second. These measurements included 27 Al + / 199 Hg + [35], 40 Ca + / 87 Sr [36], 171 Yb + (E3)/ 171 Yb + (E2) [37], 199 Hg/ 87 Sr [38], and 171 Yb/ 87 Sr [39]. The Al + /Hg + frequency ratio had already been used before to determine a new recommended value for the frequency of an optical frequency standard and a SRS [26]. With the combination of direct measurements against the caesium clocks and the optical frequency ratios, the whole body of frequency data represented at that time an overdetermined set of data. Margolis and Gill [40] proposed a method to determine the best values from such a set and their method was applied for the first time for the CIPM LoF in 2015. Robertsson developed an alternative method based on a graph theory framework for closed loops [41]. The different approaches have been tested on the relevant levels to give the same results [42]. The application of the new procedure will be discussed in more detail in the next section.
In the meantime, more frequency ratios have been determined. In 2017 the CIPM decided to leave the responsibility for the recommendations to the CCTF and CCL depending on whether the particular entry is for SRS and other time and frequency applications, or for practical realizations of the metre, respectively [43]. To this end the WGFS sends proposals to the respective consultative committee (CC) which will then inform the other CC on its decision. In 2017 a new evaluation by the CCTF took place [32].
The CIPM LoF now itemised within this publication is fully up to date with the 2017 values as ratified by the CCTF and will be fully accessible from the BIPM [44] which will be the only relevant repository for all future recommended values. This repository also contains the source data file with all the entries that led to the recommendation and the information about the applied procedure.
Properties of the CIPM LoF
The CIPM LoF at present already contains a large number of frequencies for different applications (figure 3). As discussed, some of those with the lowest uncertainties are used as SRS. A small group of four entries were recommended by the CCL as wavelength standards to realize the metre in interferometric length measurements (see table 1). Others find applications in current technology, e.g. in optical telecommunications [45]. Accurate frequency values are needed in basic science or the determination of fundamental constants [46]. The CIPM LoF is ordered according to frequency. The values of the frequency f and of the vacuum wavelength λ should be related exactly by the relation λ · f = c 0 with c 0 = 299 792 458 m s −1 , but the values of λ are rounded.
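Since the relation λ · f = c 0 is exact, converting a listed frequency into its rounded vacuum wavelength is a one-line operation; the frequency used in the sketch below is only an approximate, illustrative value near 429 THz and is not a recommended value.

```python
C0 = 299_792_458  # exact value of the speed of light in vacuum, m/s

def vacuum_wavelength_nm(frequency_hz, ndigits=5):
    """Vacuum wavelength lambda = c0 / f, returned in nm and rounded."""
    return round(C0 / frequency_hz * 1e9, ndigits)

print(vacuum_wavelength_nm(429.228e12))  # approx. 698.45 nm
```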
Following a decision by the CIPM, the CIPM LoF is conceptually divided into two parts. The first part ('active list') includes radiations of high accuracy that are of use in the realization of optical frequencies and vacuum wavelengths. The second part of the list ('frozen list') includes radiations that are still deemed useful for various applications but may have larger uncertainties and which will in general have no future updates of their value. The webpage of the BIPM currently does not discriminate between standard frequency values belonging to the first or the second part of the CIPM LoF.
Each of the listed radiations can be replaced, without degrading the accuracy, by a radiation corresponding to another component of the same transition or by another radiation, when the frequency difference is known with sufficient accuracy. In some cases, e.g. iodine stabilized lasers or acetylene stabilized lasers, such frequency intervals between transitions and hyperfine components have been validated and recommended by the CIPM also. They are also given in the source data files [44].
Many of the recommended radiations refer to the frequency of the unperturbed transition. When using these recommended frequencies it is left to the user to ensure the necessary corrections for their particular cases are applied. Some of the recommended radiations of stabilized lasers on this list specify the use of specific laser technology i.e. 'HeNe/I 2 at 473 THz' or specific operational conditions such as the intracavity power or the pressure in the absorption cell. Other laser technology may be used providing that the realization is calibrated by method (b).
One issue arising in respect of the future evolution of the CIPM LoF is the identification of criteria for inclusion of frequency values within the list. Given the powerful capability of femtosecond combs to compare optical frequencies at the 10 −20 level, there is potentially a wide range of atomic reference transitions that could be included. However, it is not considered desirable to proliferate the number of different entries within the list, and one general consideration is to examine the nature, usefulness and application of a prospective addition with respect to its metrological application. Thus, criteria might include the achieved level of uncertainty relative to the intended application. Relevant metrological applications include those in time and frequency, length and dimensional metrology, optical communication standards and applications in science and fundamental constants. Furthermore, when some radiations already included in the list are considered unlikely to find any metrological application going forward, the precedent has already been established for the radiation to be moved from the 'active' list, to the 'frozen' list. It is anticipated that no further update in the frequency values within this 'frozen' list will be warranted, either on account of their relatively high uncertainty or their lack of application. However, it remains perfectly acceptable to make use of these values for specific applications where no user alternative is readily available, such as the use of spectral lamps for gauge block calibration within industry, or where the accuracy required is sufficiently low, such as those applications where the use of an unstabilised 633 nm He-Ne laser is appropriate.
Additionally, it remains open to the WGFS to recommend, after careful deliberation, deletion from the list in certain cases where no purpose continues to be served by that radiation.
Frequency standards commonly used for the realization of the definition of the metre by interferometry
The former list of recommended radiations originally contained five radiations of lasers stabilized to molecular absorption lines together with radiations of spectral lamps [2]. Subsequently, the number of radiations in this list increased and many of them were never used for practical length measurements (even if they could have been). When the CIPM LoF was established in 2005, the 'Recommendation CCL2 (2005)' [47] proposed 'that the CCL may wish to select those frequencies which it considers important to highlight for use in high accuracy length metrology'. At the WGFS meeting on the 10-11 September 2007 the wavelengths at 633 nm, 543 nm and 532 nm were at this stage chosen as commonly used wavelengths (see table 1) but the meeting agreed to seek advice from the Working Group on Dimensional Metrology (WGDM) on this selection 5 .
In 2007, following a proposition from the CCL ('CCL13 (2007)' [49]) the CIPM recommended the unstabilised He-Ne laser at 633 nm for use in dimensional metrology. A more detailed guide relating to the use of 633 nm unstabilised lasers has been published subsequently [24].
In 2015 the CIPM-at the request of the CCL [50]-adopted the updates to the CIPM LoF [51], to include the 87 Rb d/f crossover saturated absorption D2 line at 780 nm [52,53] and the 531.5 nm saturated absorption a 1 transition in molecular 127 I 2 .
A recent detailed review about the transfer of the SI unit metre from the definition to practical length metrology can be found in [54].
Frequency standards recommended as SRS
As can be seen from figure 2 the estimated uncertainties obtained in realizing the unperturbed line centre of a transition are much lower for various atoms than the uncertainty that can be realized by the best atomic clocks based on the caesium hyperfine ground state. The lowest estimated uncertainties in the 10 −18 range have been reported for the 87 Sr optical lattice clock [55,56], the 171 Yb + single-ion clock [57] or the 27 Al + quantum logic clock [58].
One has to discriminate carefully between these estimated uncertainties to realize the true line centre of the unperturbed transition and known frequencies in the SI. In the CIPM LoF following the recommendation of the CCTF in 2017, there are now one microwave transition (hyperfine transition in 87 Rb) and eight optical frequency standards that are recommended as SRS (table 2) with estimated uncertainties as low as 4 × 10 −16 . This uncertainty is only a factor of about two larger than the uncertainties of the best primary caesium atomic clocks. In recent years, the 87 Rb fountain clock at LNE-SYRTE has regularly contributed to TAI as can be seen from the time bulletin 'Circular T' [59] and from [60]. It has been shown that TAI could benefit well from optical clocks [61][62][63]. First attempts have also been made to include 87 Sr optical lattice clocks.
Guidelines for inclusion in the CIPM LoF and statement of associated uncertainty
Given the substantial rate of progress in frequency metrology and the rapid output of new measured frequency values made possible by femtosecond frequency combs, the WGFS has developed criteria and procedures for the inclusion of a new or updated frequency value in the CIPM LoF. These are, to a large extent, based on analysis from the previous CCL-CCTF JWG and CCL MePWG, but also incorporate criteria already adopted for the inclusion of primary frequency standards in TAI [64] in the case of those radiations under consideration as SRS.
For each new evaluation-typically at intervals dictated by the official meetings of the CCL and of the CCTF-the WGFS summarizes the development and measurements all over the world to be used either for considering updates of already recommended frequencies, or possibly to be introduced as new recommended frequencies. For any such value to be included, the WGFS considers only the data that have been published in peer-reviewed, international, scientific journals. It then makes a thorough assessment of the value, and estimates an uncertainty, in which the uncertainty published in that journal is an important, but not the only, contribution. The WGFS applies a Bayesian approach to make use of all available information to estimate the uncertainty of each recommendation. Such additional information can result from a variety of sources. A few examples can illustrate this. Sometimes, authors apply corrections to their measurements, e.g. based on the measurement of others or on theoretical data without uncertainties. The working group considers these data and sometimes feels the need to increase these partial uncertainties which will affect the total uncertainty. Sometimes, authors reference their values to particular environmental conditions, e.g. at room temperature. In this case corrections have to be applied to relate the measured frequency to an environment free of perturbations, which can subsequently increase the former stated uncertainty. The use of the same theoretical or experimental sensitivity coefficient for an applied correction to the measurements of different origin leads to a correlation effect which tends to reduce the uncertainty if not correctly taken into account. Only a few institutes have at their disposal primary caesium atomic clocks that can realize the second with an uncertainty in the low 10 −16 regime. Others rely for their measurement on the SI second as provided by the international atomic time scale Coordinated Universal Time (UTC) or TAI, to which the institutes with Cs fountains contribute, and which is also a source of correlations. Furthermore, additional information is sometimes obtained only after publication of the frequency values, which can then affect the published uncertainties.
In several cases, for a particular radiation, very few frequency values-or even just one value-may be under consideration. Here one could make some statistical assumptions and give a formal estimation, but since only a small amount of information is available in a single measurement this leads most certainly to a low predictability. However, it has been shown that an alternative approach that combines the Working Group expertise, together with empirical rules for the estimation of this uncertainty, has indeed led to consistent values. This approach uses for the first values an enlargement factor to derive a global uncertainty which takes account of the values available, given the insufficient state of knowledge associated with the very small data set, different qualities of the contributions, and potential hidden dependencies. Such an enlarged preliminary uncertainty also indicates that there are likely to be factors influencing the frequency that are not covered in the same complete way in the initial phase of a new standard compared to later phases. With time, an experiment matures and the confidence in its operation increases, which ought to lead to a better and more precise uncertainty estimation as well as the possibility of identifying those components that best improve the experiment. This procedure therefore seems to be an adequate method of operation until new and improved understanding becomes available.
The criteria can be summarized as:
(i) The primary requirement for inclusion in the CIPM LoF is the existence of a peer-reviewed publication (or at least an official acceptance for publication by the journal) at the time of consideration. This presupposes the suitability of the radiation for frequency, length and other precision metrology as determined by the WGFS.
(ii) When only one frequency value is available from a single laboratory, the estimated standard uncertainty adopted is typically a factor of three larger than the uncertainty quoted in the published paper. Depending on the information available to the WGFS at the time concerning measurement data and conditions, the WGFS may consider it appropriate to expand the uncertainty by a further factor, or round the final result.
(iii) When two values are available (e.g. from a single laboratory at different times, or from two laboratories), the frequency value adopted is the mean value weighted by the respective published uncertainties. These uncertainties are combined in quadrature, and then a factor of two to three is applied to this combined value to give the estimated standard uncertainty for the CIPM LoF. In this way, more reliance is placed on the value with lower uncertainty.
(iv) For frequencies with three or more data values submitted, the value adopted is the weighted mean. For situations where the values have individual uncertainties which are of a similar magnitude (e.g. all within a factor of five), the situation is such that statistical analysis can be applied, but with some recognition that this may still not be a fully robust procedure.
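As a numerical illustration of rules (ii)-(iv), not the WGFS's actual software, the following Python sketch computes a weighted mean and an enlarged estimated standard uncertainty. The frequency values and uncertainties are placeholders, and the quadrature combination of rule (iii) is read here as the standard uncertainty of the weighted mean.

```python
import numpy as np

def wgfs_estimate(values_hz, u_hz, expansion=3.0):
    """Weighted mean of independent frequency values and an enlarged
    ("estimated standard") uncertainty, in the spirit of rules (ii)-(iv)."""
    values_hz = np.asarray(values_hz, dtype=float)
    u_hz = np.asarray(u_hz, dtype=float)
    w = 1.0 / u_hz**2
    mean = np.sum(w * values_hz) / np.sum(w)      # weighted mean, rules (iii)/(iv)
    u_combined = np.sqrt(1.0 / np.sum(w))         # quadrature combination
    return mean, expansion * u_combined           # enlargement factor, rules (ii)/(iii)

# rule (ii): a single published value, enlargement factor of three
print(wgfs_estimate([429_228_004_229_877.0], [2.0], expansion=3.0))

# rule (iii): two values, weighted mean with a factor of two to three
print(wgfs_estimate([429_228_004_229_877.0, 429_228_004_229_874.0],
                    [2.0, 1.0], expansion=2.5))
```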
These rules have been applied over the last two decades for individual frequency values derived from a direct comparison with the caesium atomic clock. With the availability of high accuracy direct frequency ratios between (mostly optical) frequency standards, the above stated rules were amended: (v) For frequencies linked to other frequencies in the CIPM LoF by direct or indirect frequency comparisons with sufficiently low uncertainty, the recommended frequency value results from a least squares analysis of the relevant data. The uncertainties include the estimated correlations between the different measurements. Rules (ii)-(iv) are applied accordingly to single measurements of frequency ratios.
This procedure helps to cope with different aims, such as the consistency of the frequency scale and the derivation of realistic uncertainties that can be used in the commonly accepted framework of the guide to the expression of uncertainty in measurement (GUM) [65]. The global uncertainty derived for the listed radiations needs to be estimated to ensure consistency with future values and potentially tighter uncertainties, and to avoid discrete steps in frequency value of a magnitude larger than the combined uncertainties of previous and future values. This is also important to ensure that discontinuities in the SI second are avoided if the new data is, for example, incorporated into a new definition of the second. Furthermore, the uncertainty of the recommended frequency will often be used as one input data point for an uncertainty budget with several other input data. Following the GUM [65] all independent contributions will be added in quadrature, tacitly assuming that the probability density of the particular contribution is Gaussian. This is only justified if the central limit theorem applies to a good approximation, which is definitely not the case if there are only one or two entries. In such a case the connection between the standard uncertainty with expansion factor k = 1, k = 2 and k = 3 and the confidence interval 68.27%, 95.45% and 99.73%, respectively, for an infinite number of measurements (degrees of freedom) completely breaks down. For one degree of freedom, the more appropriate Student's t-distribution shows that the interval that encompasses a fraction of 68.27%, 95.45% and 99.73% of the distribution would have to be enlarged by a factor of 1.84, ~7 and ~78, respectively, as compared to the Gaussian distribution. By using the term 'estimated standard uncertainty', in general one thinks of 68% coverage. Here, the Student corrections for one or two measurements already make up a considerable fraction of the factors 2-3 applied by the WGFS.
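The enlargement factors quoted above can be checked with a few lines of Python (an illustration only, using scipy):

```python
from scipy import stats

for k, coverage in [(1, 0.6827), (2, 0.9545), (3, 0.9973)]:
    q = 0.5 * (1.0 + coverage)            # upper quantile of the two-sided interval
    t_half_width = stats.t.ppf(q, df=1)   # Student's t with one degree of freedom
    z_half_width = stats.norm.ppf(q)      # Gaussian half-width (approximately k)
    print(f"k = {k}: enlargement factor {t_half_width / z_half_width:.1f}")
# prints enlargement factors of about 1.8, 7.0 and 78.6, in line with the text
```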
Looking at the actual coverage in hindsight, if about every third value is actually outside these limits, this could give a hint about the validity of the adopted interpretation of the initial recommended value. Even though the data base of the CIPM LoF is still very small for such an investigation, in several cases our current best frequency estimate based on more measurements is very close to such a confidence limit of the first recommendation. Examples include 115 In + and 88 Sr. This observation lends support to the interpretation that the uncertainty of the recommended frequency values at these early stages can also be regarded as the typical estimated standard uncertainties.
It is interesting to compare this procedure with that of the Committee on Data of the International Council for Science (CODATA) [66]. 'This group calculates the weighted mean and uncertainty for measurements from several laboratories and normally takes a simple weighted mean and weighted uncertainty and then checks the chi-squared. If the data set is not consistent with the calculated distribution, the practice is to multiply the variances of all the measurements by the same multiplier (in some cases this has been as large as 15) and again take a simple weighted mean and weighted uncertainty. Again, the chi-squared is calculated to ensure that the calculated mean and uncertainty are consistent with the measurements. In contrast to the procedure of the WGFS, the CODATA group does not multiply the variances of only one or some of the measured values. The variances of all the measurements are all multiplied by the same value. If one measurement dominates over the others and is consistent with the others, then no multiplication is performed before taking the weighted mean' [67].
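For comparison, a minimal sketch of the multiplier-based procedure quoted above might look as follows. It is an illustration with invented numbers; the actual CODATA adjustments are far more elaborate.

```python
import numpy as np
from scipy import stats

def codata_style_mean(values, uncertainties, p_min=0.05):
    """Weighted mean; if chi-squared indicates inconsistency, multiply the
    variances of *all* measurements by a common factor and recompute."""
    x = np.asarray(values, dtype=float)
    u = np.asarray(uncertainties, dtype=float)
    while True:
        w = 1.0 / u**2
        mean = np.sum(w * x) / np.sum(w)
        u_mean = np.sqrt(1.0 / np.sum(w))
        chi2 = np.sum(((x - mean) / u) ** 2)
        dof = len(x) - 1
        if dof == 0 or stats.chi2.sf(chi2, dof) >= p_min:
            return mean, u_mean
        u = u * np.sqrt(chi2 / dof)   # same multiplier applied to every variance

print(codata_style_mean([10.0, 10.4, 9.1], [0.1, 0.1, 0.1]))  # inconsistent set
```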
Iodine stabilized laser at 474
THz. The iodine stabilized laser at 474 THz has for a long time been the most prominent laser standard for the realization of the metre. Extensive intercomparisons with the lasers maintained at the BIPM allowed the CIPM to reduce the estimated standard uncertainty from 3.4 × 10 −10 (1984) to 2.5 × 10 −11 (1992) and 2.1 × 10 −11 (2003). Comparisons with the laser BIPM4, which essentially served as a practical realization of the metre, showed that the mean of all lasers in the different National Metrology Institutes agreed with their frequencies within about 2.5 kHz [68,69]. In 2003 [70] the absolute frequency was directly measured for BIPM4 and found to be 473 612 335 605.4 kHz with a combined uncertainty of 1.8 kHz. This is a value in close agreement with the value adopted for this radiation in the list of recommended radiations for the realization of the metre, further attesting the successful implementation of the metre up to that date. Nevertheless the recommended uncertainty was kept at 10 kHz since most of the lasers used the same design and optical set-up (which was not specified in the recommendation) and where other configurations seemed to have a larger influence on the stabilized frequency. The formal work of validation and implementation of the metre is today organized under the CCL key comparison CCL-K11 based on frequency comb techniques, and is reported to the WGFS. 87 Sr transition in an optical lattice (points) and associated uncertainties for N measurements together with the frequency values (purple bars) recommended by the CIPM and the associated uncertainty bands (pink bands). N = 1: [73]; N = 2: [74]; N = 3: [75]; N = 4: [76]; N = 5: [77]; N = 6: [78]; N = 7: [79]; N = 8: [80]; N = 9: [81]; N = 10: [61]; N = 11: [82]; N = 12: [84]; N = 13: [85]; N = 14: [87]; N = 15: [88]; N = 16: [62]; N = 17: [62]; N = 18: [89]; N = 19: [90]. See text. laboratories to devise a recommendation, but did not consider the earlier values [71,72] that were not consistent with later ones. After approval by the CCTF and CCL, the CIPM in 2006 recommended the frequency 429 228 004 229 877 Hz with an estimated fractional uncertainty of 1.5 × 10 −14 , equivalent to 6.4 Hz. This frequency value and the assigned uncertainty are shown in figure 4 in the left section by the purple horizontal bar and pink area, respectively. At the meeting of the WGFS in 2009, four new frequency measurements were available [76][77][78][79]. Two of them came from the same laboratory (JILA [76,78]) with the second one having a threefold reduced uncertainty. Hence, only the latter one was included, together with the two values from France and Japan [77,79], to derive the weighted mean of 429 228 004 229 873.7 Hz with a fractional uncertainty of 1 × 10 −15 which was subsequently recommended by the CIPM [25]. This low uncertainty allowed the CIPM to recommend the Sr lattice clock transition as an SRS. Two new measurements [80,81] were performed for the next evaluation in 2012 and a weighted mean of these five values sees the frequency value reduced by 0.3 Hz. The fractional uncertainty was kept at 1 × 10 −15 since the later measurements seemed to have a slightly lower value compared to the earlier ones. The new value was recommended by the CIPM in 2013 [26].
For the 2015 evaluation there were seven new measurements available [61,[82][83][84][85][86][87]. Together with the previous measurements, the new measurements (except for one) were used to derive a new recommendation based on a weighted mean. The measurement of Hachisu et al [86] was omitted because it was essentially based on the measurement of Falke et al [84]. In this evaluation the first optical frequency ratio measurements were also introduced, in the way described in more detail below. Frequency ratios connected the 87 Sr value with the values of the 171 Yb and the 199 Hg transitions in lattice clocks and in the 40 Ca + single-ion clock. Due to the large number of low uncertainty 87 Sr data the inclusion of these frequency ratios did not have much influence on the 87 Sr value itself, but were extremely helpful in tying down the uncertainties of other frequencies linked with the 87 Sr values by the measured ratios.
The latest evaluation results from 2017 included five more direct frequency measurements with respect to the caesium clocks, and frequency ratio measurements with respect to other optical and microwave standards. The last two measurements (number 18 and 19) did not use a local primary frequency standard but were related to TAI. All the new measurements with low uncertainty were slightly below the recommendation of 2015. The outcome of the latest adjustment-to be discussed in more detail below-used all 19 values displayed in figure 4. As a result, the recommended frequency was reduced by 0.2 Hz and the fractional uncertainty was reduced to 4 × 10 −16 . This uncertainty is not much higher than the relative uncertainty in realizing the SI Hz with the best primary caesium fountains. The estimated uncertainty was based on the comparison, via a fibre link, between primary standards [91] which included the uncertainties of the primary standards as well as the contribution of the fibre link.
Inclusion of optical frequency ratios
As pointed out above, the inclusion of optical frequency ratios and optical-to-microwave ratios has changed the evaluation procedure substantially. Besides a number of direct frequency measurements compared directly against the caesium atomic clock, there are a number of optical frequency ratios between optical atomic clocks that have been determined (figure 4) with much smaller uncertainties than would be possible if caesium clocks or other microwave clocks were involved. They include 27 Al + / 199 Hg + [58], 40 Ca + / 87 Sr [36], 171 Yb + (E3)/ 171 Yb + (E2) [37], 199 Hg/ 87 Sr [38], 171 Yb/ 87 Sr [92] or 199 Hg/ 87 Rb [93]. Such frequency ratios have already been used to determine new recommended values for the frequencies of optical frequency standards and SRS [27]. Together with the direct absolute frequency measurements with respect to the caesium clocks, these frequency ratio measurements form an overdetermined set of data. It can be foreseen that optical frequency ratio measurements will involve an increasing number of the frequency standards (figure 5) and may also include new ones.
Margolis and Gill proposed and applied a least squares method to determine the 'best' estimates of the frequency values [40] from such a set of overdetermined measurements. All validated frequency measurements and frequency ratio measurements are prepared as frequency ratios, with the direct frequency measurements against the caesium primary standard also expressed as frequency ratios. The fact that the input data set consists of frequency ratios makes this a nonlinear least squares problem requiring linearization and iterations to find an acceptable solution. The adjusted frequency values can be used to determine other frequencies if the frequency ratio is to be measured later. Independent programmes are available and have been used to validate the codes. One of these, devised by Robertsson [41], uses a slightly different conceptual approach which is based on the examination of closed loops in a graph theory framework [94]. Such closed loops can be easily recognized in figure 5, e.g. by the three-node loop comprising 171 Yb + (E2), 171 Yb + (E3), 133 Cs or the four-node loop comprising 171 Yb, 87 Sr, 199 Hg, 133 Cs. To circumvent the non-linearity of the ratios the logarithms of the frequencies are used, leading to a linear least squares problem. Similar to a three-cornered hat analysis, the logarithms of all frequency ratios around a closed loop should add up to zero. This provides a set of conditions which, in a Lagrange multiplier scheme, helps to identify the basis vectors for the residual space in the least squares calculation. A projection on this subspace gives the corrections in the experimental ratio values.
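To make the log-linearisation concrete, here is a deliberately simplified Python sketch of such an adjustment (it is not the code of [40] or [41]): each measured ratio contributes one linear equation in the logarithms of the frequencies, the caesium frequency is pinned to its defined value, and all node labels, ratio values and uncertainties below are invented placeholders.

```python
import numpy as np

standards = ["Cs", "Sr", "Yb"]                  # hypothetical set of standards
idx = {s: i for i, s in enumerate(standards)}
f_cs = 9_192_631_770.0                          # defining caesium frequency in Hz

# (numerator, denominator, measured ratio, relative uncertainty) - all invented
measurements = [
    ("Sr", "Cs", 46_700.0, 1e-12),                           # absolute measurements
    ("Yb", "Cs", 56_500.0, 1e-12),
    ("Yb", "Sr", 56_500.0 / 46_700.0 * (1 + 3e-13), 1e-13),  # optical ratio
]

rows, rhs, weights = [], [], []
for num, den, ratio, u_rel in measurements:
    row = np.zeros(len(standards))
    row[idx[num]], row[idx[den]] = 1.0, -1.0    # ln(f_num) - ln(f_den) = ln(ratio)
    rows.append(row); rhs.append(np.log(ratio)); weights.append(1.0 / u_rel)

row = np.zeros(len(standards)); row[idx["Cs"]] = 1.0
rows.append(row); rhs.append(np.log(f_cs)); weights.append(1e16)  # pin ln(f_Cs)

A, b, w = np.array(rows), np.array(rhs), np.array(weights)
log_f, *_ = np.linalg.lstsq(A * w[:, None], b * w, rcond=None)
for s in standards:
    print(s, np.exp(log_f[idx[s]]), "Hz")       # adjusted frequency values
```

A full adjustment would additionally carry a covariance matrix for the input ratios and propagate it to the adjusted values, which is exactly where the correlations discussed in the following paragraphs enter.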
These methods for using all available experimental frequency data with their proper weights lead to a system of adjusted values that are more robust against outliers as compared to isolated frequency ratios. With the two methods discussed above, such outliers can even be identified, as has been demonstrated in [40]. If such outliers are not identified, the whole system of recommended frequencies can be affected by an erroneous frequency ratio measurement with underestimated uncertainty. In the same way, correlations between single measurements, if not properly identified, can have similar effects. Consider the case of two frequency measurements of, for example, 171 Yb + (E2), 171 Yb + (E3) in figure 5 performed at the same time against the same Cs clock. Any increase in the Cs frequency will immediately lead to a correlated reduction of the uncertainties of both frequency ratios 171 Yb + (E2)/ 133 Cs and 171 Yb + (E3)/ 133 Cs. Unidentified correlations would consequently overestimate the weight of these two particular measurements in the system of frequencies.
Thus the WGFS tries to estimate such correlations quantitatively. The GUM gives rules to calculate or estimate the correlation coefficients whose square is used in the two methods given above for the correlation matrix.
There might, however, be less-controllable sources of correlations that are harder to quantify. It has been shown recently that the SI-traceable measurement of an optical frequency can be performed at the low 10 −16 level without a local primary standard, but referenced to TAI [90]. In this case any frequency ratio measurement against a local caesium clock that contributes to TAI during such a measurement will show residual correlations with any other optical frequency standard that is measured against TAI. For the time being, such a correlation will be significantly reduced by the averaging process used to generate TAI but will become more prominent if two optical standards are directly measured against TAI at the same time. It will become even more pronounced if optical clock networks [95] for optical frequency ratios are employed as a matter of course. This suggests that additional rules for reporting both frequency measurements and frequency ratio measurements are needed where, for example, the links and the full data of the measurement period (including the start and end times, the time of day (UTC) and the calendar day, together with the relevant interruptions) are stated. The WGFS is developing reporting guidelines that aim to take correlation effects into account more fully.
Recent deliberations by the CCL-CCTF WGFS have also considered the potential for inclusion of high accuracy frequency ratios within an additional appendix to the CIPM LoF. Whilst any optical-Cs microwave frequency ratio necessarily includes the uncertainty associated with the primary Cs frequency value, direct optical-optical frequency ratios of, for example, SRS are capable of much lower uncertainty due to their better reproducibility than the Cs standard and the capability of comb measurements at uncertainty levels even below the optical reproducibilities. Such a procedure, however, has not yet been decided.
Towards a new definition of the SI second
It was obvious for a long time that a much lower inherent uncertainty, and a much higher relative stability, of the clock frequency could be realized with clocks operating on an optical transition rather than a microwave transition. The process initiated in 2001 led to the establishment of SRS in order to investigate their suitability for a future redefinition of the SI second and to utilize them in the realization of TAI with the prospect of improved time scales. Fifteen years later, nine SRS are available ( 87 Rb as a microwave standard and eight optical SRS). The 87 Rb standard and the Sr lattice clock at SYRTE are beginning to contribute regularly to TAI and it has been shown that a time scale can be established based on an optical clock that is superior to one based on even the best caesium fountain clocks [61][62][63]. By introducing more of the SRS and possibly replacing less accurate clocks like hydrogen masers or caesium beam clocks at the same time, the SRS could gradually begin to improve the TAI and UTC time scales.
In the meantime, optical atomic clocks, optical long haul links and the various methods of all-optical frequency metrology are finding widespread applications and have even led to the creation of novel fields like relativistic geodesy [96,97]. It is well known that according to General Relativity two optical clocks in a different gravitational potential show different frequencies [98] when compared, and this effect has been taken into account for a long time when comparing microwave atomic clocks in the international time scales TAI and UTC. But with the achieved accuracy and stability of optical atomic clocks it becomes possible to use this effect to determine the difference in the gravitational potential of two locations on Earth. In time and frequency metrology, apart from the creation of better time scales, the distribution of highly accurate and extremely stable optical frequencies via fibres to many customers [99] may lead to new services or allow synchronization of clocks over large distances. For tests of fundamental theories or the question of the constancy of fundamental constants [37,100], optical atomic clocks are the measuring devices of choice to grasp the first hints of new physics. In space technology and astronomy, the ultra-precise tracking of spacecraft and the improved reference systems for very long baseline interferometry, respectively, will benefit from the optical clocks. Thus, there is a growing community that will benefit from a redefinition of the second in terms of an optical clock transition, and so the questions of when the time is right to redefine the unit of time, what the necessary requirements are, and possible time scales for such a process are becoming increasingly relevant and urgent [101][102][103]. The Working Group of Strategic Planning (WGSP) of the CCTF has thus devised a roadmap to accompany this process, which is outlined in the following section.
Milestones on a roadmap towards a redefinition of the second
From figure 2, one expects that the uncertainties of optical frequency standards will continue to reduce over the coming years. The recent developments in novel excitation schemes [104][105][106], 3D confinement of quantum absorbers [107], and new strategies for the reduction of systematic shifts [108] lend promise to this expectation that fractional uncertainties below 10 −18 could be reached. However, limited knowledge about the exact gravitational potential suggests difficulties in the use of practical time scales at this level on Earth. With fractional inaccuracies in the 10 −18 regime, the geopotential has to be determined to the cm-level to account for gravitational redshift. From this point of view, the time would be right for a new definition when the optical clocks have furnished proof that their typical performances reach fractional uncertainties of around 10 −18 , which would be roughly two orders of magnitude lower than that expected of the best caesium fountains of the time.
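The quoted cm-level requirement follows directly from the first-order gravitational redshift near the Earth's surface, Δν/ν ≈ gΔh/c², as this short check illustrates:

```python
g = 9.81     # local gravitational acceleration in m/s^2
c = 2.998e8  # speed of light in m/s
for dh in (0.01, 1.0):                       # height differences of 1 cm and 1 m
    print(f"dh = {dh} m -> dnu/nu = {g * dh / c**2:.2e}")
# about 1.1e-18 per centimetre: clocks at the 10^-18 level resolve cm-level
# differences in height (equivalently, in gravitational potential)
```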
There are several ways to properly verify when such a hundredfold improvement in the potential accuracy of optical clocks over caesium primary clocks has been achieved. Optical clocks of the same type, e.g. the Sr lattice clocks at SYRTE, NPL and PTB can be compared with such an uncertainty in the different laboratories linked via already-established fibre links [95,103,109]. Frequency comparisons between remote clocks can also be performed using transportable optical atomic clocks [110,111]. A third option for such a comparison can be based on different measured frequency ratios and the associated evaluations between remote optical clocks in the way suggested below.
From these considerations, one could define the first two milestones to be reached before a new definition can take place. The time for a new definition is right when 1. at least three different optical clocks (either in different laboratories, or of different species) have demonstrated validated uncertainties of about two orders of magnitude better than the best Cs atomic clocks of the time. 2. at least three independent measurements of at least one optical clock from milestone 1 have been compared in different institutes (with, e.g., Δν/ν < 5 × 10 −18 ) either by transportable clocks, advanced links, or frequency ratio closures.
To assure continuity between the present definition and the new definition, the frequency of the selected optical clock has to be measured with respect to the best caesium fountain atomic clocks with uncertainties essentially determined by the fountain clocks. Thus, the time for a new definition is right when 3. three independent measurements of the optical frequency standards of milestone 1 with three independent Cs primary clocks have been performed, where the measurements are limited essentially by the uncertainty of these Cs fountain clocks (with, e.g., Δν/ν < 3 × 10 −16 ).
It is highly desirable that optical clocks that have been assigned the status of SRS contribute regularly to TAI in order to improve the time scale and to further develop the technology and protocols of improved methods for comparisons. This requires another milestone. The time for a new definition is right when 4. optical clocks (SRS) contribute regularly to TAI.
To allow for closures and links between the dozen or more different optical standards and their continuous use, a fifth milestone would thus be desirable. The time for a new definition is right when 5. optical frequency ratios between a few (at least 5) other optical frequency standards have been measured; each ratio measured at least twice by independent laboratories, and agreement was found to better than, e.g., Δν/ν < 5 × 10 −18 .
It remains within the authority of the CIPM as to when it will make a proposition to the CGPM for a redefinition. From the current status it can be estimated that the new definition could come into effect before 2030. After a redefinition, the present standard of time and frequency would serve as an SRS where the uncertainty to realize the second would be the same as before. Improvements in the caesium atomic clocks would then be evaluated regularly within the established framework of a caesium clock as an SRS.
It should be noted that a number of the national metrology institutes will have the ability to link a chosen species for a new optical definition of the second to other optical clock species accepted as SRS, by means of femtosecond comb and optical fibre transfer techniques, without an increase in the combined uncertainties above the level of a few times 10 −18 . In this case, one would be able to realize the new definition by means of these SRS with very little increase in uncertainty.
Conclusion
Optical frequency standards, first used as vacuum wavelength standards for the realization of the metre in length metrology, have evolved into optical clocks that now find their most prominent application in frequency metrology. The clear demand in these and other fields, with novel and unforeseen applications, has led to a single list of recommended frequency standard values for applications including the practical realization of the metre and SRS. These frequency values and their uncertainties have been determined with coherent procedures as described in this publication. Even though the rapid progress in optical frequencies has been less beneficial for dimensional length measurements under ambient conditions, i.e. where the index of refraction matters, length measurements in space will also benefit from the new technologies and platforms that use optical frequencies. The newly-established analyses and procedures for deriving a coherent list of recommended frequencies from an overdetermined set of measurements are leading to a transparent and robust system of high reliability and low uncertainty. It is, furthermore, a solid basis that can lead to a future new definition of the SI unit of time, the second.
Are early measured resting-state EEG parameters predictive for upper limb motor impairment six months poststroke?
OBJECTIVES
Investigate whether resting-state EEG parameters recorded early poststroke can predict upper extremity motor impairment reflected by the Fugl-Meyer motor score (FM-UE) after six months, and whether they have prognostic value in addition to FM-UE at baseline.
METHODS
Quantitative EEG parameters delta/alpha ratio (DAR), brain symmetry index (BSI) and directional BSI (BSIdir) were derived from 62-channel resting-state EEG recordings in 39 adults within three weeks after a first-ever ischemic hemispheric stroke. FM-UE scores were acquired within three weeks (FM-UEbaseline) and at 26 weeks poststroke (FM-UEw26). Linear regression analyses were performed using a forward selection procedure to predict FM-UEw26.
RESULTS
BSI calculated over the theta band (BSItheta) (β = -0.40; p = 0.013) was the strongest EEG-based predictor regarding FM-UEw26. BSItheta (β = -0.27; p = 0.006) remained a significant predictor when added to a regression model including FM-UEbaseline, increasing explained variance from 61.5% to 68.1%.
CONCLUSION
Higher BSItheta values, reflecting more power asymmetry over the hemispheres, predict more upper limb motor impairment six months after stroke. Moreover, BSItheta shows additive prognostic value regarding FM-UEw26 next to FM-UEbaseline scores, and thereby contains unique information regarding upper extremity motor recovery.
SIGNIFICANCE
To our knowledge, we are the first to show that resting-state EEG parameters can serve as prognostic biomarkers of stroke recovery, in addition to FM-UEbaseline scores.
Introduction
Stroke is a major cause of adult disability worldwide (Sacco et al., 2013). In the early phase, about 80% of stroke survivors suffer from motor impairments of the upper extremity (Langhorne et al., 2009). Recently, five subgroups of stroke patients were identified, based on their highly heterogeneous patterns of motor recovery within a time window of ten weeks poststroke (Vliet et al., 2020). Motor recovery is largely independent of the type of therapeutic intervention and is referred to as spontaneous neurological recovery (Duncan et al., 1992;Kwakkel et al., 2004). Motor impairment, measured by the Fugl-Meyer motor assessment of the upper extremity (FM-UE), includes patients' stereotypical co-articulation of multiple joints (Krakauer and Carmichael, 2017). This so-called synergy dependency in motor control appears to be a major limitation when performing selective, dissociated movements (Twitchell, 1951). Several studies showed that one of the best early phase predictors of chronic motor impairment, reflected by the FM-UE six months after stroke, is the FM-UE scores at baseline (Prabhakaran et al., 2008;Winters et al., 2015). However, recent studies showed that the variation in degree of recovery between subjects may range from non-recoverers to excellent recoverers, suggesting that FM-UE measured at baseline (FM-UE baseline ) in itself may not be an optimal predictor (Prabhakaran et al., 2008;Winters et al., 2015;Vliet et al., 2020). As noted by the Stroke Recovery and Rehabilitation Roundtable (SRRR) task force, there is an urgent need for complementary prognostic biomarkers in addition to clinical assessments to optimize the accuracy of current prediction models for spontaneous motor recovery (Boyd et al., 2017;Ward, 2017). This is particularly important as early poststroke clinical assessments may not be able to distinguish patients who will show spontaneous upper limb motor recovery from those who will not (Vliet et al., 2020).
Parameters derived using structural imaging techniques showed that corticospinal tract (CST) integrity has predictive value for motor recovery (Puig et al., 2017;Rondina et al., 2017;Lin et al., 2019). The predictive value of motor evoked potentials and asymmetry in fractional anisotropy of the posterior limbs of the internal capsules also indicates a role for the CST regarding motor recovery (Byblow et al., 2015). Next to these structural imaging characteristics, potential biomarkers that might be associated with motor outcome include derivatives of cortical activity, which can be recorded using electroencephalography (EEG) (Ward, 2017). The level of cortical deficits after stroke may be quantified by resting-state EEG, as altered resting-state cortical activity has been associated with motor dysfunction (Carter et al., 2012;Guggisberg et al., 2019). Resting-state EEG recording is specifically suitable for the stroke population early after onset, since it is portable, non-invasive and does not require voluntary motor performance with the paretic upper limb.
Hemispheric stroke has been associated with altered lowfrequency oscillations in the delta and theta bands (van Putten and Tavy, 2004;Andraus and Alves-Leon, 2011;Finnigan and van Putten, 2013;Britton et al., 2016), whereas unaltered alpha activity seems to be associated with healthy brain activity (Bazanova, 2012). A combination of these spectral characteristics can be expressed by the delta/alpha ratio (DAR). This ratio may more sensitively reflect the severity of neurological deficits compared to the individual spectral components, as, for instance, delta activity may increase with or without decreased alpha activity. Unilateral stroke may also affect the activity of the cortical areas involved through modified spectral power distributions over the hemispheres. This power asymmetry can be quantified via the pairwise-derived brain symmetry index (BSI) (Sheorajpanday et al., 2009) and directional BSI (BSIdir) (Saes et al., 2019).
Quantitative resting-state EEG parameters such as DAR and BSI, measured early poststroke, are predictors of future global neurological deficits reflected by the National Institutes of Health Stroke Scale (NIHSS) and degree of dependency assessed with the modified Rankin Scale (mRS) (Finnigan et al., 2007;Sheorajpanday et al., 2011;Finnigan and van Putten, 2013;Bentes et al., 2018;Doerrfuss et al., 2020). Furthermore, recent analyses showed that BSI calculated over the delta frequency band (BSI delta ) was longitudinally associated with FM-UE, whereas DAR, DAR of the affected hemisphere (DAR AH ), BSI, and BSIdir over the delta (BSIdir delta ) and theta band (BSIdir theta ) were longitudinally associated with NIHSS (Saes et al., 2020). However, the potential of EEG parameters to serve as additional prognostic biomarker when combined with clinical scores regarding upper limb motor recovery poststroke remains unclear (Doerrfuss et al., 2020).
The first objective of the current analysis of the prospective cohort study named 4D-EEG was to investigate whether early measured resting-state EEG parameters have predictive value regarding motor impairment of the paretic upper limb, as reflected by FM-UE, six months after a first-ever ischemic stroke. We expected this to be true, especially for the BSI and BSIdir over the low frequency bands which were previously found to be longitudinally associated with FM-UE (Saes et al., 2020). Our second aim was to investigate whether these resting-state EEG parameters have prognostic value in addition to FM-UE measured at baseline.
Participants
All patients who were admitted to the stroke units of six participating hospitals between June 2015 and June 2017 were potentially eligible for participation. The inclusion criteria were: (1) first-ever ischemic stroke according to CT or MRI scan; (2) less than three weeks poststroke; (3) upper limb paresis (NIHSS 5a/b > 0); (4) ≥ 18 years of age; and (5) having provided written informed consent. Exclusion criteria were: (1) upper extremity orthopedic limitations prior to stroke onset; (2) recurrent stroke; (3) severe cognitive problems, i.e. Mini Mental State Examination score < 18; (4) other neurological diseases; and (5) using medication which is likely to affect neuronal oscillations. The study was registered at the Netherlands Trial Register (NTR4221), approved by the Medical Ethics Committee of the VU University Medical Center, Amsterdam, The Netherlands (4D-EEG: NL47079.029.14) and carried out in accordance with the Code of Ethics of the World Medical Association (Declaration of Helsinki, 2013) (World Medical Association, 2013). Analyses were performed on longitudinal data of 39 participants, which were also used for analyses in Saes et al. (2020). There was no overlap regarding participants included in Saes et al. (2019). All participants received usual care according to the Dutch stroke guidelines for physical therapy (Veerbeek et al., 2014).
Procedures
The baseline measurement involved an EEG recording and a clinical assessment performed on consecutive days, as soon after stroke onset as feasible, but at least within the first three weeks. EEG recordings were performed in a specially equipped van (Saes et al., 2020). This provided the opportunity to visit patients at their place of residence, to limit their burden and ensure standardization. FM-UE was performed as part of the baseline clinical assessment (FM-UE baseline ) and repeated at 26 weeks poststroke (FM-UE w26 ).
Electroencephalography
High-density 62-channel EEG was recorded using an actively shielded EEG cap with electrode placement according to the international 10-20 system at a sampling rate of 2048 Hz (Ag/AgCl electrodes and REFA multichannel amplifier, TMSi, Oldenzaal, The Netherlands, with ASA acquisition software, ANT software BV, The Netherlands). Resting-state EEG with eyes open was acquired while subjects were seated and focused their eyes on a dot displayed on a screen for one minute. Five 1-minute trials were recorded, with sufficient rest in between. Electrode impedances were kept below 20 kΩ. EEG signals were online referenced to average. During the EEG recording, muscle relaxation of the arms was monitored using bipolar Ag/AgCl electrodes to detect muscle activity of the m. extensor carpi radialis and m. flexor carpi radialis of both arms.
Pre-processing
Offline analysis was conducted using Matlab (R2012a, The Mathworks, Natick, MA, USA) in combination with the FieldTrip toolbox for EEG/MEG analysis (Oostenveld et al., 2011). EEG signals were filtered using a 4th-order bi-directional high-pass Butterworth filter with a cut-off at 0.5 Hz. A notch filter around 50, 100 and 150 Hz with a bandwidth of 1 Hz was used to reduce power-line artefacts, followed by a low-pass filter at 130 Hz. Channels which showed no data or very poor data quality were rejected and interpolated as the weighted average of the surrounding electrodes, followed by re-referencing to the remaining average. On average, 0.17 electrodes were interpolated per measurement. Eye-blinks and muscle activity artefacts were removed using independent component analysis based on visual inspection of the components' waveforms, power spectrum and topographic distributions. On average 2.9 components were removed per measurement. Remaining artefacts were removed during a second round of visual inspection. Modified periodograms with a Hanning window with size equal to the epoch length served as proxies of the spectral power density per channel.
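As a rough illustration of this filtering chain (the study itself used Matlab with FieldTrip; the low-pass filter order below is an assumption, and the data are random placeholders), a Python sketch could look like this:

```python
import numpy as np
from scipy import signal

fs = 2048.0                                  # sampling rate (Hz)

def preprocess(eeg):                         # eeg: (n_channels, n_samples)
    sos = signal.butter(4, 0.5, "highpass", fs=fs, output="sos")
    out = signal.sosfiltfilt(sos, eeg, axis=-1)            # bi-directional (zero phase)
    for f0 in (50.0, 100.0, 150.0):                        # power-line notches
        b, a = signal.iirnotch(f0, Q=f0 / 1.0, fs=fs)      # ~1 Hz bandwidth
        out = signal.filtfilt(b, a, out, axis=-1)
    sos = signal.butter(4, 130.0, "lowpass", fs=fs, output="sos")  # order assumed
    return signal.sosfiltfilt(sos, out, axis=-1)

def power_spectrum(epoch):                   # modified periodogram per channel
    freqs, psd = signal.periodogram(epoch, fs=fs, window="hann", axis=-1)
    return freqs, psd

# usage with random data standing in for a 1 s epoch of 62-channel EEG
eeg = np.random.randn(62, int(fs))
freqs, psd = power_spectrum(preprocess(eeg))
```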
Quantitative resting-state EEG parameters
Delta/alpha ratio
Hemispheric stroke has been associated with increased low frequency oscillations in the delta and theta band (van Putten and Tavy, 2004; Andraus and Alves-Leon, 2011; Finnigan and van Putten, 2013; Britton et al., 2016). On the other hand, unaltered alpha activity has been associated with healthy brain activity (Bazanova, 2012). The delta/alpha ratio (DAR) combines these spectral characteristics and was defined as the ratio between the mean delta power (1-4 Hz) and the mean alpha power (8-12 Hz). For every channel c the power of the delta and alpha frequency bands (f = 1, ..., 4 Hz and f = 8, ..., 12 Hz, respectively) was determined as the mean of the spectral power P_c(f) over this range. The DAR was computed as

$$\mathrm{DAR}_c = \frac{\left\langle P_c(f)\right\rangle_{f\in[1,4]\,\mathrm{Hz}}}{\left\langle P_c(f)\right\rangle_{f\in[8,12]\,\mathrm{Hz}}} \qquad (1)$$

Subsequently, we averaged the ratios over all N EEG channels yielding the global DAR:

$$\mathrm{DAR} = \frac{1}{N}\sum_{c=1}^{N}\mathrm{DAR}_c \qquad (2)$$

In addition to the assessment over the whole brain, DAR was also determined over the affected (DAR AH ) and unaffected hemisphere (DAR UH ).
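A Python sketch of equations (1) and (2) (the study's own analysis was done in Matlab; the spectra below are random stand-ins):

```python
import numpy as np

def band_power(psd, freqs, fmin, fmax):
    """Mean spectral power per channel within the [fmin, fmax] Hz band."""
    band = (freqs >= fmin) & (freqs <= fmax)
    return psd[:, band].mean(axis=1)

def delta_alpha_ratio(psd, freqs):
    """Global DAR: per-channel delta/alpha ratio averaged over all channels."""
    dar_per_channel = band_power(psd, freqs, 1, 4) / band_power(psd, freqs, 8, 12)
    return dar_per_channel.mean()

# DAR_AH / DAR_UH would average dar_per_channel over the channel indices of the
# affected or unaffected hemisphere only (montage specific, so not shown here).
freqs = np.linspace(0, 130, 1024)                  # placeholder frequency axis
psd = np.abs(np.random.randn(62, freqs.size))      # placeholder spectra, 62 channels
print(delta_alpha_ratio(psd, freqs))
```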
Brain symmetry index
The BSI represents the spectral power distribution asymmetry over the hemispheres, which may be affected due to unilateral stroke altering cortical activity. BSI was defined as the absolute pairwise normalized difference in spectral power between the homologous channels over the left c L and right c R hemisphere.
The difference was averaged over a range from 1 to 25 Hz (adapted from Sheorajpanday et al., 2009):

$$\mathrm{BSI}_{cp} = \left\langle \frac{\left|P_{c_R}(f) - P_{c_L}(f)\right|}{P_{c_R}(f) + P_{c_L}(f)} \right\rangle_{f\in[1,25]\,\mathrm{Hz}} \qquad (3)$$

These values were averaged over all channel pairs cp:

$$\mathrm{BSI} = \frac{1}{N_{cp}}\sum_{cp}\mathrm{BSI}_{cp} \qquad (4)$$

BSI values theoretically range from 0 to 1, indicating maximal symmetry and maximal asymmetry, respectively. In (3) and (4), electrodes of the mid-line were excluded since they do not form channel-pairs. In our earlier study, we showed the relevance of the lower frequency bands for the stroke population (Saes et al., 2019). Therefore, in addition to the estimates over the entire 1-25 Hz range, BSI was also determined separately for the delta (1-4 Hz) and theta (4-8 Hz) frequency bands (BSI delta and BSI theta ).
To account for the direction of the asymmetry, we also computed the directional BSI (BSIdir) (Saes et al., 2019). The BSIdir disregards the absolute value of the numerator of the BSI calculation shown in (3). The sign of BSIdir was chosen such that values between 0 and 1 reflected greater cortical power in the affected hemisphere compared to the unaffected hemisphere, and vice versa for values between -1 and 0. Also BSIdir was determined separately for the delta (BSIdir delta ) and theta (BSIdir theta ) frequency band.
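A Python sketch of the BSI and BSIdir computations described above (channel pairing and hemisphere assignment are montage specific and therefore only assumed; the spectra are random stand-ins):

```python
import numpy as np

def brain_symmetry_index(psd_aff, psd_unaff, freqs, fmin=1, fmax=25,
                         directional=False):
    """psd_aff / psd_unaff: (n_pairs, n_freqs) spectra of homologous channels on
    the affected and unaffected hemisphere. Returns BSI (0..1) or BSIdir (-1..1),
    where positive BSIdir means more power over the affected hemisphere."""
    band = (freqs >= fmin) & (freqs <= fmax)
    diff = psd_aff[:, band] - psd_unaff[:, band]
    total = psd_aff[:, band] + psd_unaff[:, band]
    asym = diff / total if directional else np.abs(diff) / total
    return asym.mean()          # average over frequencies and channel pairs

freqs = np.linspace(0, 130, 1024)
psd_aff = np.abs(np.random.randn(27, freqs.size))    # 27 homologous pairs assumed
psd_unaff = np.abs(np.random.randn(27, freqs.size))
bsi_theta = brain_symmetry_index(psd_aff, psd_unaff, freqs, 4, 8)
bsidir_theta = brain_symmetry_index(psd_aff, psd_unaff, freqs, 4, 8, directional=True)
print(bsi_theta, bsidir_theta)
```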
Clinical measures
The FM-UE is a valid and reliable clinical test reflecting motor impairment after stroke (Gladstone et al., 2002). Additional clinical assessments for subject characterization included NIHSS, Action Research Arm Test (ARAT), Erasmus MC modification of the Nottingham Sensory Assessment of the upper extremity (EmNSA), Motricity Index of the Upper/Lower Extremity (MI-UE/MI-LE), Edinburgh Handedness Inventory and Bamford classification.
Statistics
A forward selection procedure was used to identify the strongest predictor of FM-UE w26 based on quantitative resting-state EEG. Investigated EEG parameters concerned: DAR, DAR UH , DAR AH , BSI, BSI delta , BSI theta , BSIdir, BSIdir delta , and BSIdir theta .
Subsequently, a stepwise forward selection procedure with FM-UE baseline as base model, was used to find the EEG parameter which has the most added value. The F-test was used to check whether adding a quantitative resting-state EEG parameter significantly increased the explained variance.
All statistical analyses were conducted using IBM SPSS Statistics for Windows, version 26.0 (IBM Corp., Armonk, NY, USA). For each model, the distribution of residuals was tested for normality by inspecting histograms and Q-Q plots. As is common in prediction models, the significance level of covariates was set to α < 0.05.
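The added-value step of this procedure (does an EEG parameter significantly increase the explained variance over FM-UE at baseline?) can be sketched in Python with statsmodels rather than SPSS; all data below are random placeholders:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 39                                         # sample size as in the study
fm_baseline = rng.uniform(0, 66, n)            # FM-UE at baseline (placeholder)
bsi_theta = rng.uniform(0, 1, n)               # candidate EEG predictor (placeholder)
fm_w26 = 0.8 * fm_baseline - 10 * bsi_theta + rng.normal(0, 5, n)

base = sm.OLS(fm_w26, sm.add_constant(fm_baseline)).fit()
full = sm.OLS(fm_w26, sm.add_constant(np.column_stack([fm_baseline, bsi_theta]))).fit()

f_stat, p_value, df_diff = full.compare_f_test(base)   # F-test for the added term
print(base.rsquared, full.rsquared, p_value)
```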
Participants
A total of 2095 patients were screened, 55 of whom were eligible and willing to participate in this longitudinal observational cohort study. Thirty-nine patients completed the EEG recording at baseline and the clinical assessments at baseline and 26 weeks poststroke, and were included in the analyses. A flowchart of screening, inclusion and drop-outs is depicted in Fig. 1. The EEG recording was performed at 12.3 ± 5.8 days (mean ± SD) poststroke. Clinical assessments were performed at 11.6 ± 5.3 and 185.2 ± 20.0 days poststroke, referred to as baseline and w26, respectively. In the present study, the number of days between stroke onset and the baseline clinical measurement or EEG recording was not significantly correlated with FM-UE baseline (r(37) = −0.206, p = 0.21 and r(37) = −0.273, p = 0.09, respectively). Patient characteristics are summarized in Table 1. A complete overview of the data can be found in Supplementary Table S1.
Discussion
We investigated whether early measured resting-state EEG parameters have prognostic value regarding upper extremity motor impairment at six months poststroke in 39 patients. From the investigated quantitative resting-state EEG parameters, hemispheric power asymmetry in the theta band (BSI theta ) was the strongest prognostic biomarker of FM-UE w26 . A higher BSI theta , reflecting more asymmetry between hemispheres in the theta band, predicts a lower FM-UE w26 . Moreover, BSI theta showed prognostic value in addition to baseline FM-UE alone and increased the explained variance from 61.5% to 68.1%. This reveals that BSI theta contains unique information compared to upper extremity motor scores at baseline regarding upper extremity motor impairment at six months, and therefore has potential to serve as additive prognostic biomarker of stroke recovery.
The present study is the first to investigate the prognostic value of quantitative resting-state EEG parameters (DAR and BSI, and variations thereof) measured early poststroke regarding upper extremity motor impairment after six months, as reflected by FM-UE. Earlier studies showed that these EEG parameters could serve as predictors of global neurological impairment (NIHSS) and degree of dependency regarding daily activities (mRS) poststroke (Sheorajpanday et al., 2010, 2011; Bentes et al., 2018; Doerrfuss et al., 2020).
In contrast to BSI, DAR was not a predictor of the motor function of the upper extremity at 26 weeks poststroke, although earlier studies did show prognostic value of DAR regarding global neurological impairments reflected by NIHSS at 30 days poststroke (Finnigan et al., 2007; Finnigan and van Putten, 2013) or negative functional outcome reflected by mRS ≥ 3 at 12 months poststroke (Bentes et al., 2018). The absence of predictive value of DAR regarding FM-UE is in line with our earlier analyses, in which we showed a longitudinal association of DAR with NIHSS, but not with FM-UE (Saes et al., 2020). Furthermore, Butz et al. (2004) showed no relation between clinical symptoms and increased delta activity near the lesion measured between one and fourteen days poststroke. DAR was previously found to be a predictor of NIHSS when assessed within 48 hours after stroke onset, and DAR AH was shown to be a predictor of negative functional outcome (mRS ≥ 3) when assessed within 72 hours, while the mean poststroke measurement time in the current study was 12.3 days (Finnigan et al., 2007; Bentes et al., 2018). It has been suggested that DAR decreases (i.e. normalizes) between 24 and 48 hours poststroke (Finnigan et al., 2007, 2016). Therefore, although speculative, if DAR serves as a prognostic biomarker, this may be restricted to the very early stage poststroke. The present results show that BSI theta has added value in predicting upper extremity motor impairment at six months post stroke compared to the FM-UE baseline score alone. Previously, BSI was shown to be a predictor of negative functional outcome (mRS ≥ 3), but not when corrected for other clinical scores (Bentes et al., 2018). However, the BSI over the low frequency bands was not investigated. BSI theta may have the potential to serve as additive prognostic biomarker of motor recovery in addition to clinical measures. Our findings suggest that EEG data contains unique information regarding stroke severity, possibly as a reflection of cortical network integrity, which is required for behavioral recovery. The fact that this added value originates especially from low-frequency oscillations is in line with the suggestion that such activity is related with reorganization. Low-frequency cortical activity may be the result of partial deafferentation of the cortex caused by a lesion which damaged cortico-cortical connections (Gloor et al., 1977; Butz et al., 2004). Furthermore, synchronous neuronal activity in the low-frequency range has been related with axonal sprouting after ischemic lesions in rats (Carmichael and Chesselet, 2002). Therefore, it has been suggested that increased low-frequency activity may be related with reorganization after stroke (Butz et al., 2004).
BSI theta showed prognostic value, in contrast to BSIdir theta. This suggests that not just the affected hemisphere shows increased theta power compared to the less-affected hemisphere but also vice versa. Therefore, compared to BSIdir theta, BSI theta might be a better reflection of neuronal damage with predictive value.
Table note: Dependent variable: FM-UE w26. Abbreviations: FM-UE w26, Fugl-Meyer motor assessment of the upper extremity at 26 weeks poststroke; DAR, delta/alpha ratio; AH/UH, affected/unaffected hemisphere; BSI, brain symmetry index; BSIdir, directional BSI; delta/theta, calculated over the delta (1-4 Hz) or theta (4-8 Hz) frequency band; B, regression coefficient; 95%-CI, 95% confidence interval; p, probability value; significance level was set to α < 0.05, significant p-values are displayed in bold font; β, standardized beta; R², explained variance.
Table 3
Regression coefficients of early measured EEG parameters in addition to FM-UE baseline scores to predict FM-UE score at six months poststroke.
Limitations
Despite using a specially equipped van that allowed us to visit patients at their place of residence to limit their burden (Saes et al., 2020), a number of patients (14 out of 55) dropped out during this longitudinal observational study. The measurement protocol presented here was part of a larger serially conducted protocol of the 4D-EEG study, and the resting-state condition recording took only a few minutes of the quite extensive EEG recording protocol. The protocol as actually performed was adjusted for each patient to ensure feasibility and prevent overloading. However, five patients experienced the measurements as too exhausting, especially regarding the combination of their usual care and participating in research. Generalizability was analyzed by performing a cross-validation using a leave-one-out procedure, which showed that the standard error of the estimate increased by only 3.4% compared to the model based on all data. External cross-validation using an independent dataset should be performed to confirm the presented findings. The maximum NIHSS score observed at inclusion was 15, indicating that our sample does not contain severely affected patients and may suffer from sampling bias. Nevertheless, we see a large variety in upper extremity motor deficits reflected by FM-UE, which was the focus of our study. Inclusion of severely affected stroke patients would most likely have increased the number of patients with low FM-UE scores. Since in those patients FM-UE baseline may be of limited informative value regarding the prediction of their recovery (Van der Vliet et al., 2020), we expect the additive predictive value of EEG to increase. This requires further investigation. Methodological procedures resulted in a time delay of about twelve days between stroke onset and the first measurement. Therefore, we could not quantify possible changes regarding neurological deficits in the first days after stroke onset. In addition, the present study focused on DAR and BSI (and variations thereof), while other quantitative EEG parameters, such as the delta-theta/alpha-beta ratio or relative powers per frequency band, might have prognostic value as well (Bentes et al., 2018). Furthermore, in the current study MRI data was unavailable for a large proportion of the patients. Finally, the present study was focused on FM-UE, which is the clinical assessment most closely related to neurological impairment. However, BSI theta also showed potential for prediction of upper limb capacity reflected by ARAT as outcome measure, emphasizing its robustness (Supplementary Tables S2 and S3).
Future directions
Prediction modeling for the identification of patients who show recovery poststroke is of high interest in the current literature. A recently proposed mixture model classifies stroke patients into five recovery groups based on initial FM-UE scores and their recovery pattern (Vliet et al., 2020). Moderately and severely affected patients in particular were shown to be often misclassified and may benefit from additional prognostic biomarkers (Winters et al., 2015; Vliet et al., 2020). It remains to be investigated whether quantitative resting-state EEG parameters improve the accuracy of the mixture model, and thereby improve the identification of severely affected patients who will show recovery. To take early changes into account, the first EEG recording should preferably be performed within the first days after stroke and repeated more frequently within the first weeks.
Second, the current study only concerned the recovery of FM-UE scores, which is assumed to be the clinical measure most closely related to behavioral restitution. However, it is known that FM-UE scores suffer from ceiling effects after three months poststroke (Gladstone et al., 2002). Kinematic or kinetic performance assays, such as selective elbow extension during restrained reaching, finger individuation or pinch and grip strength, may be more fine-grained and responsive to behavioral restitution as a reflection of true neurological repair.
Furthermore, the additive prognostic value of very early derived EEG parameters (<72 hours poststroke) above clinical measures has still to be established. Limiting the number of electrodes lowers the burden of patients and increases feasibility of performing EEG recordings in the acute phase. For example, a previous study showed that quantitative EEG parameters derived from only four electrodes have prognostic value regarding cognitive functioning (Schleiger et al., 2014). The minimum number and exact location of the EEG electrodes to obtain data with added value regarding motor recovery in the very early phase has yet to be investigated.
Finally, besides quantitative resting-state EEG parameters, several other parameters can be derived from EEG data, which should be considered. An example is the dynamic signal propagation between active cortical sources during a sensory stimulation task, derived from a combination of EEG, MRI and diffusion MRI data (Filatova et al., 2018). This technique enables the association between the quality of task-specific signal propagation and functional recovery to be investigated serially during motor recovery after stroke. Furthermore, a MEG study in stroke patients showed that reduced movement-related beta desynchronization is related to the level of motor impairment (Rossiter et al., 2014). Also EEG parameters reflecting the quality of functional network organization within and between hemispheres might be of interest for understanding which patients show recovery after stroke and which do not (Nicolo et al., 2015;Guggisberg et al., 2019). For example, inter-regional synchronization of neural oscillations in the first weeks after stroke has been associated with improvement of motor function (Nicolo et al., 2015). It remains to be investigated how these parameters develop longitudinally within the different subgroups of proportional recovery (Vliet et al., 2020) and whether they may serve as prognostic biomarkers for the outcome at six months poststroke.
Declaration of Competing Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Effectiveness of psychosocial interventions in abused children and their families
Background: Child abuse is a significant public health and social problem worldwide. It can be described as a failure to provide care and protection for children by the parents or other caregivers. This study aimed at evaluating the effectiveness of psychosocial interventions in abused children and their families. Methods: This quasi-experimental study was conducted in the psychosocial support unit of a pediatric hospital in Bandar Abbas, Iran, from 2012 to 2013. The participants consisted of child abuse cases and their parents who referred to the psychosocial support unit to receive services. Services delivered in this unit included parenting skills training, psychiatric treatments, and supportive services. The effectiveness of the interventions was assessed with the Child Abuse Questionnaire, the General Health Questionnaire (GHQ), and the Strengths and Difficulties Questionnaire (SDQ). Participants were assessed at baseline and at 3- and 6-month follow-ups. ANOVA with repeated measures and the Friedman test were used to evaluate the effect of the interventions. Results: A total of 68 children and their parents enrolled in this study, of whom 53% were males. Post-intervention follow-ups revealed significant changes in mothers' general health questionnaire scores (p<0.001), and in children's conduct problems (p<0.05), hyperactivity (p<0.001), and peer problems (p<0.05). Physical and emotional abuse significantly decreased (p<0.001). Conclusion: Our findings revealed that psychosocial interventions effectively improved child-parent interaction and the mental health of parents. The effectiveness of interventions based on subgroup analysis and implications of the results have been discussed for further development of psychosocial interventions in the health system.
Introduction
Evidence shows that child abuse, as a health problem in Iranian families, needs to be addressed (1,2). According to ecological models of child development (3), defining child abuse as a multifaceted problem requires a context that explains all the influential factors in a systematic approach (4)(5)(6) and addresses the risk factors and multiple support factors at the individual, familial, and social levels, as well as their interaction (7).
At the individual level, children's characteristics, namely, behavioral problems, physical aggression, antisocial behaviors, poor emotional adjustment, distraction, negative emotions, difficult temperament, developmental retardation, and physical disabilities (8)(9)(10)(11)(12)(13)(14); at the familial level, deficits in parenting skills and wrong parenting attitudes (15,16), mental health problems of the parents, i.e., depression and stress (7,(17)(18)(19)(20)(21), addiction, and substance abuse (22,23); and at a larger perspective, the living status including economic and social status (24,25), lack of social support networks, and local communication (26) are among the main risk factors of child abuse. Child abuse risk factors are targeted and addressed in interventional programs, particularly family interventions (27). In the literature on child abuse prevention, the role of family interventions in this respect has been greatly emphasized (16,(28)(29)(30). Evidence shows that programs focusing on a combination of changing attitudes, enhancing knowledge, and parenting skills are more effective than programs focusing on just one parameter (31). In addition, the more successful programs are those that combine group training, personal counseling, and house visits (32)(33)(34).
Although child abuse commonly occurs in the Iranian families, most children and their parents do not receive adequate and efficient support (1,35). The organizations providing services to abused children in Iran such as the Ministry of Health, the Welfare Organization, and the Imam Khomeini Relief Foundation are not coordinated with each other, and this decreases the efficacy of their services. To overcome this problem and fill the gap among these organizations and their services, some interventions were designed and implemented from the diagnosis level to different therapeutic and supportive services. Considering the ecological approach and the complex nature of child abuse, this study aimed at designing an efficient and feasible intervention model. The primary outcome measure in this study was reducing the frequency of child abuse, and the secondary outcome measure was improving mental health of parents and decreasing the problems of children.
Study Design
This prospective quasi-experimental pilot study was conducted to design and establish a child support unit in a hospital, and its chief objective was to provide an efficient and feasible intervention model and assess its effectiveness for abused children and their families.
Participants
The target population was abused children and their families who were referred, from 2012 to 2013, to the psychosocial support unit of a pediatric hospital in Bandar Abbas, Iran, a specialty unit providing psychosocial support services for the referred patients. In this study, child abuse included physical, emotional, and sexual abuse. The sample size was calculated to be 50 assuming alpha = 0.05, beta = 0.2 (study power of 80%), and an effect size of 4. To compensate for possible dropouts or loss to follow-up, 68 participants were included.
The inclusion criteria included children residing in Bandar Abbas, who were diagnosed as cases of child abuse by the physicians, and their parents were cooperative and consented to their participation in the study. The exclusion criteria were abused children whose parents did not consent to their participation in the study, those residing in other cities, and those who could not be accessed.
Assessment Tools
The following tools were used in the present study: 1. Demographic Questionnaire: This questionnaire included data about the child and parents such as gender, living status, order of child in the family, parents' marital status, occupational status, education level, age, history of substance abuse, and history of mental disorders in parents.
2. Child Abuse Checklist: This checklist was designed by Arabgol et al. (35) using the World Health Organization's definition of violence and taking into account various forms of child abuse. This checklist evaluates all forms of child abuse as follows: physical, emotional, and sexual abuse.
3. Child's Strengths and Difficulties Questionnaire (SDQ): This questionnaire is a short screening tool with 25 items. Each item can be answered with "certainly true", "somewhat true", or "not true", and the questionnaire evaluates 5 major subgroups of psychological symptoms: conduct problems, hyperactivity, emotional symptoms, peer problems, and prosocial behavior. The sum of the first 4 subgroups comprises the total difficulties score (36,37). Tehranidoost et al. (38) evaluated this questionnaire in an Iranian population and its sensitivity was calculated to be 74%. 4. General Health Questionnaire (GHQ): The 28-question General Health Questionnaire (GHQ-28) was designed by Goldberg & Hillier (1979) by factor analysis of its longer version. This questionnaire contains questions assessing the individual's mental status in the past month and includes signs, namely, abnormal thoughts and feelings, and aspects of observable behaviors emphasizing the present situation. It has the ability to measure various aspects of mental health such as physical manifestations, anxiety, insomnia, and depression. The total score of an individual is obtained by summing the 4 subscale scores; for scoring, each answer from right to left is allocated a score of 0, 1, 2, or 3. Ebrahimi et al. (2007) reported a cut-off point of 24 for this questionnaire, with a sensitivity of 80% and a specificity of 99% at this point. The split-half reliability coefficient and Cronbach's alpha were 90% and 97%, respectively, in their study.
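To make the scoring scheme concrete, the short Python sketch below illustrates Likert-style GHQ-28 scoring as described above (0-3 per item, four 7-item subscales, total compared with the reported cut-off of 24); the item responses and subscale labels are hypothetical placeholders, not study data.

```python
# Illustrative sketch only (hypothetical data): Likert scoring of the GHQ-28,
# with each of the 28 items scored 0-3 and grouped into four 7-item subscales.
from typing import Dict, List

CUT_OFF = 24  # cut-off point reported by Ebrahimi et al. (2007)

def ghq28_score(responses: List[int]) -> Dict[str, object]:
    """Sum the 28 item scores (each 0..3) into subscale scores and a total."""
    assert len(responses) == 28 and all(0 <= r <= 3 for r in responses)
    subscales = {
        "somatic symptoms":   sum(responses[0:7]),
        "anxiety/insomnia":   sum(responses[7:14]),
        "social dysfunction": sum(responses[14:21]),
        "severe depression":  sum(responses[21:28]),
    }
    total = sum(subscales.values())
    return {"subscales": subscales, "total": total, "above_cut_off": total > CUT_OFF}

example_mother = [1, 2, 0, 1, 3, 2, 1] * 4  # hypothetical answers for one respondent
print(ghq28_score(example_mother))
```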
Procedure
First, all the personnel and attending physicians in the departments and emergency ward of the pediatric hospital attended the training workshops on how to detect cases of child abuse. They were requested to look for such cases in their routine daily activities, and in case of finding a case of child abuse, report the case to psychosocial support units for abused children for further scrutiny (Table 1).
Children referred to the psychosocial support unit were first visited by a psychologist with adequate experience in recognizing abused children.
In the next step, the abused child and his/her parents were examined and visited by a psychiatrist. If the psychiatrist diagnosed that the child or parents required pharmaceutical therapy, the therapeutic intervention was started and the next session for continuation of medical treatment
was scheduled. Then, the child and parents were referred to a psychologist or a social worker for nonpharmaceutical interventions. Nearly all the parents participated in parenting skills training and anger management courses. In addition, the children participated in a training course for learning how to protect themselves from child abuse and to be prepared for everyday life, although some of the parents were resistant to this activity. Other nonpharmaceutical therapies, specifically social and legal support, were provided based on the requirements of the children or their parents. These interventions included counseling services, school counseling, financial support through welfare organizations or charity foundations, home visits, etc. Social workers referred children and their parents to support services if necessary. If the living status of the patients had to be evaluated, or in case of parents not showing up for treatment, home visits were performed. If the child's life was in danger, social workers requested legal support for the child. Parenting skills were taught to parents in 6 sessions. These sessions were held based on the principles of constructive training, with the difference that 2 of the 6 sessions were allocated to anger management and discussions on child abuse, physical punishment, and its negative effects on the child.
Data Collection
After establishing a connection with the children and their parents, Child Abuse Questionnaire was filled out based on the collected data (through observing, examining, and interviewing the parents and children). Experience shows that most abusive parents hide abusing their children, thus, for data collection, it is very important for the counselor to establish a connection with them and encourage them to open up. Child's Strengths and Difficulties Questionnaire was then filled out for the child based on the obtained data from the parents. General Health Questionnaire was filled out for the mothers. The participants received services and were followed-up after 3 and 6 months. These questionnaires were completed by the psychologist of the team who had participated in an educational workshop (the workshop on Child Abuse Detection and Acquaintance with the Instruments), and had adequate clinical experience for making a connection with the children and parents.
The study design and objectives were thoroughly explained to the parents and older children (adolescents), and after obtaining written informed consent, they were entered into the study.
Data Analysis
Data were analyzed using SPSS 18 software. Demographic characteristics of the participants were expressed as frequency, percentage, mean, and standard deviation. ANOVA was used to assess the effect of interventions on the child's strengths and difficulties and mothers' mental health. To eliminate the possible effects of mother education, this variable was entered into the covariance test as a covariate. Bonferroni post hoc test was then used to assess the differences between the mean values in the 3 stages.
To evaluate the effect of interventions on physical and emotional child abuse (ordinal scale), nonparametric Friedman test was used. P<0.05 was considered statistically significant.
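As an illustration of the nonparametric analysis described above, the following Python sketch applies the Friedman test to three repeated assessments; the scores shown are hypothetical placeholders, not data from this study.

```python
# Illustrative sketch only (hypothetical scores): Friedman test over the three
# assessments (baseline, 3 months, 6 months), as used for the ordinal abuse scores.
from scipy.stats import friedmanchisquare

baseline = [4, 3, 5, 2, 4, 3, 5, 4]
month_3  = [2, 2, 3, 1, 2, 2, 3, 2]
month_6  = [1, 1, 2, 1, 1, 2, 2, 1]

stat, p_value = friedmanchisquare(baseline, month_3, month_6)
print(f"Friedman chi-square = {stat:.2f}, p = {p_value:.4f}")
# p < 0.05 would indicate a significant change across the three assessments.
```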
Results
As shown in the participants' flow diagram (Fig. 1), out of 130 cases referred to the center, a total of 78 cases of child abuse were detected; out of which, 10 did not consent to participate in the study. The remaining 68 were entered into the study; and 4 were excluded prior to the first assessment (at 3 months), and 7 were excluded prior to the second assessment (at 6 months). A total of 68 cases were evaluated. Tables 2 and 3 present the demographic characteristics of the children and their parents.
Evaluation of child abuse showed that 60 children had experienced at least 1 type of emotional abuse, out of which 8 (11.8%) experienced one, 16 (23.5%) experienced 2-3, and 36 (52.9%) experienced 4 or more types of emotional abuse.
The frequency of comorbidity of physical and emotional abuse at the onset of the study was 34 cases (50 %), which dropped to 2 cases (2.9%) in the first assessment (at 3 months), and to 1 (1.5%) in the second assessment (at 6 months).
Data obtained from the Child's Strengths and Difficulties Questionnaire were analyzed using repeated measures ANOVA. The results revealed that, after controlling for mother's education, the differences between the first and second and between the first and third assessments were statistically significant for all subscales except emotional symptoms and socialization (Table 4). In addition, evaluating the differences between the mean values across the 3 assessments revealed a significant reduction in the general health questionnaire score of the mothers (p=0.001).
Evaluating the changes in child abuse during the interventions using Friedman test indicated that the mean score of physical abuse and emotional abuse during the 3 levels of intervention significantly decreased (p<0.001; Table 5).
Discussion
Evaluation of the efficacy of the interventions for reducing most of the behavioral problems of children in our study revealed that the intervention program successfully decreased the severity of the problems. In addition, this finding confirms the possible efficacy of family behavioral interventions for decreasing the behavioral and emotional problems of children (38). In another study (39), it was demonstrated that intervention among high-risk families increased the productivity of children and resolved their behavioral problems. It could not be clearly specified whether targeting the parenting style of parents resolved children's problems, or whether promoting the mental health of parents decreased their negative perception of their children's behavior. Harnett and Dawe (39) believed that empowering parents with emotional adjustment leads to better management of children and that teaching child management skills to parents improves their emotional efficiency. In our study, the results of the GHQ-28 questionnaire revealed that more than one-third of parents (33.8%) had a score higher than the cut-off point before the intervention; this value significantly decreased at 6 months (7.4%). The study results also showed that the mean rank of emotional and physical abuse significantly decreased during the intervention, and this reduction was more pronounced for physical abuse than for emotional abuse. Mikton and Butchart (40), in their systematic review of interventions for child abuse, reported that parent education and multi-level programs are more effective for preventing child abuse. Our study, as a multilevel intervention, targeted multiple problems and requirements of families by providing medical, psychological, psychiatric, and educational services. Moreover, our study was a family wellness program offering interventions from short-term counseling to parenting classes and, sometimes, home visits for children at risk of abuse (41). These programs are a series of designed services; thus, separate assessment of their individual efficacy is not possible. However, the results of meta-analyses showed that such programs decrease the rate of child abuse (42).
In our study, almost all interventions and services were provided in the psychosocial support unit. Chaffin et al. (43) found that services offered in the unit were more effective than services provided during home visits. On the other hand, those referred to the psychosocial support unit included infants and newborns presenting to the medical center for routine health care services and checkups, therefore, these interventions could be included in the health system services. Dubowitz et al. (44) demonstrated that offering these services to high risk families leads to decreased rate of abuse and severe punishment. Thus, these interventions can be added to the general health interventions. Evidence shows that these interventions, especially for potentially at risk families, significantly decrease substance abuse and related child abuse (45).
Conclusion
Based on the obtained results, it seems that the psychosocial support unit can be a suitable center to provide services for abused children and their families. Although the service package used in our study limited separate evaluation of the efficacy of interventions, it contained all the various required services by the families. A multidisciplinary team is required for providing such services. Moreover, in these interventions, family is considered as a single unit, and in some cases, family members significantly help and participate in the services.
Limitations
Our study had some limitations. The first was that the interventions were offered as a service package; thus, we could not separately assess the effectiveness of each intervention. For example, we could not specify whether teaching parenting styles to parents helped improve children's problems, or whether promoting the mental health of parents decreased their negative perception of their children's behaviors. The second limitation of our study, similar to others (46), was that the participating families had a high rate of tension and resistance, which is a common problem in child abuse studies. Abusive and high-risk families are usually not willing to participate in interventional programs and are less likely to enroll in research-based programs (47). Another limitation was the duration of follow-up of the patients, which was 6 months in our study. It seems that this time was not enough to confirm the continuous efficacy of the interventions and could be a limitation for ensuring the stability of the results and outcome of this interventional package (39).
Another limitation of the current study was that a psychologist filled out the questionnaires. The main reason for this was further scrutiny and higher accuracy when collecting the data, because filling out the questionnaires by the mothers carried the risk of distortion of information. To prevent bias when collecting the data, the mentioned psychologist was only tasked with filling out the questionnaires and played no role in providing services. However, it is still considered a limitation of this study.
The absence of a control group was a major limitation of this study. However, we could not design a control group because it was unethical to deprive a group of patients of adequate care. Thus, it was considered a limitation of this study, and only patients' information was compared before and after the intervention.
Hybrid Invasive Weed Improved Grasshopper Optimization Algorithm for Cloud Load Balancing
In cloud computing, the processes of load balancing and task scheduling are major concerns as they are the primary mechanisms responsible for executing tasks by allocating and utilizing the resources of Virtual Machines (VMs) in a more optimal way. This problem of balancing loads and scheduling tasks in the cloud computing scenario can be categorized as an NP-hard problem. Load balancing requires tasks to be allocated efficiently to VMs while sustaining the trade-off among the complete set of VMs. It also needs to maintain equilibrium among VMs with the objective of maximizing throughput with a minimized time span. In this paper, a Hybrid Invasive Weed Improved Grasshopper Optimization Algorithm-based efficient Load Balancing (HIWIGOA-LB) technique is proposed by adopting the merits of the Invasive Weed Optimization Algorithm (IWOA) into the Grasshopper Optimization Algorithm (GOA) for determining the near-optimal solution that facilitates optimal load balancing. In particular, the random walk strategy is adopted to prevent the local point of optimality problem. It also utilizes the strategy of grouping to modify the exploitation coefficient associated with the traditional GOA for balancing the rate of exploration and exploitation. The simulation investigations of the proposed HIWIGOA-LB scheme confirmed its better performance in minimizing the makespan and response time by 13.21% and 16.71%, with a maximized throughput of 19.28%, better than the baseline approaches considered for investigation.
Introduction
In general, cloud computing plays an anchor role as a pervasive game-changer in most of the operations that involve resource-intensive applications, operating models, collaborative abilities, end-user services, and service provisioning [1]. The primary objective of cloud services is to offer end-users easy and rapid access to virtual machines [2]. It also focuses on providing a diversified number of distributed services with maximum efficacy [3]. At this juncture, the activity of load balancing the tasks between the VMs is a highly important aspect due to the increasing number of tasks submitted to them in the public cloud [4]. Load balancing also focuses on improving performance with minimized cost. In this paper, the HIWIGOA-LB scheme is proposed for attaining potential load balancing of tasks between virtual machines with the benefits of Improved GOA (IGOA) and the Invasive Weed Optimization Algorithm (IWOA) for balancing exploitation and exploration incurred during the mapping process [13]. This proposed HIWIGOA-LB scheme adopted the random walk and grouping strategies to handle the issue of local optimality and to improve exploration by incorporating a modified movement coefficient into the classical GOA [14]. The simulation experiments of the proposed HIWIGOA-LB and baseline schemes were conducted using makespan, resource utilization rate, throughput, and response time with different numbers of cloudlets. The simulation investigations of the proposed HIWIGOA-LB and baseline schemes are also achieved using execution time, makespan, and degree of imbalance based on their impacts before and after the load balancing process [15]. The subsequent sections of the paper are as follows: Section 2 details the review conducted on the existing swarm intelligence-based load balancing approaches that have contributed to the literature over recent years. Section 3 presents the whole view of the propounded HIWIGOA-LB scheme with the primitive algorithms of GOA and IWOA, with this integration and its employment to the problem of load balancing that maps tasks to suitable VMs depending on the degree of utilization. Section 4 illustrates the simulation experimental setup and results investigation of the HIWIGOA-LB scheme and the benchmarked schemes with the justification behind its predominant performance. Section 5 concludes the paper with its main contributions and future scope of improvement.
Related Works
The recently proposed meta-heuristic optimization algorithms-based load balancing schemes are discussed as follows.
Binary Particle Swarm Optimization and Gravitational Search Algorithms (PSOGSA) are used for load balancing and task scheduling in the cloud environment. The Binary Load Balancing-Hybrid Particle Swarm Optimization and Gravitational Search Algorithm (Bin-LB-PSOGSA) is a bio-inspired technique that effectively allows scheduling jobs to enhance the balance level of loads on VMs [16]. The method identifies the best task-to-VM association, which is guided by the measurement of the allocated workload and VM execution speed. The searching process of Bin-LB-PSOGSA allows the tasks to submit requests to VMs dynamically, and rescheduling of tasks that have already been allotted is re-applied. The search space is described as a hypercube. Every mass travels over hypercube nodes by turning over one or several bits of the mass position matrix; accordingly, the position matrix is binary-coded. In [17], a new balanced Particle Swarm Optimization-based algorithm (BPSO) was developed for load scheduling in the cloud. The proposed BPSO technique enhanced balancing in the load scheduling process and reduced the total transfer time and total stabilization cost. [18] presented a load balancing scheme using the method of simulated annealing (SA). This SA scheme was initially introduced to solve hard combinatorial optimization problems based on the principle of controlled randomization. SA further plays a vital role in the minimization process attributed to temperature replication, which plays a dominant role in the field of thermodynamics. This method was proposed for estimating optimal solutions based on the phenomenon of random derivatives.
The main properties of this method concentrate on determining the worst deviation that has the probability of being accepted as a principal solution. Thus, this SA scheme was optimal over the other searching methodologies since it inherited the potential to prevent an optimal solution from being trapped in the local point of optimality. A Hybrid Pigeon and Harris Hawks Optimization Algorithm-based load balancing approach (HPHHOA-LBS) was proposed for guaranteeing optimal utilization of resources with minimized task response time [19]. This HPHHOA-LBS approach significantly handled the load among different available VMs in a short time compared to the existing algorithms. It was identified to improve the efficiency of the load balancing process to a maximum level of 97.84%, compared with the baseline schemes. Then, a Binary Bird Swarm Optimization Algorithm-based Load Balancing (BBSWOA-LB) technique was proposed for assigning tasks to suitable VMs in the cloud environment [20]. In this BBSWOA-LB technique, the tasks are considered non-preemptive and independent in nature. The experimental investigations of this approach were conducted using the GOCI dataset logged by Google during the execution of cloudlets under real-time workloads. It concentrates on improving the overall performance of the system by balancing the entire system with a minimized response time. It derived the merits of the binary Bird Swarm Optimization Algorithm (BSWOA) for determining the underload and overload conditions of VMs with maximized optimality.
A new Artificial Bee Colony Monarchy Butterfly Optimization Algorithm-based Load Balancing (ABCMBOA-LB) scheme was proposed for effective resource utilization with a minimized makespan and maximized throughput [21]. It was proposed with the exploitation capability of MBOA and the exploration potentialities of ABC for potential mapping of tasks into ideal VMs in the cloud computing environment. The simulation experiments of this ABCMBOA-LB scheme confirmed maximized DOI and minimized makespan independent of the number of cloudlets considered for evaluation. An integrated Discrete ABC and Pareto-based load balancing scheme was proposed by [22] for handling the issue of flexible task scheduling through the enforcement of multiple objectives. This integrated Discrete Artificial Bee Colony (ABC) and Pareto-based load balancing approach consists of two components related to scheduling and routing for facilitating effective solutions in the domain with greater efficacy. It uses discrete values for both the utilized scheduling and routing components. A crossover operator was used in this discrete ABC and Pareto scheme to incorporate more potential into the employed bee phase for the purpose of discovering potent information from the scheduling and routing components. It included an exterior Pareto archive group for recording the previously estimated non-dominated solutions and a rapid Pareto set updating procedure.
A Honeybee behavior-based load balancing algorithm was proposed by [23] for sharing incoming requests in the cloud computing environment in an effective way. It focused on maintaining potential balance among the virtual machines with the objective of improving the level of throughput. It can balance the task's priority on the virtual machines such that the cumulative time involved in waiting is predominantly reduced to the maximum level. It also exhibited a notable improvement in the mean execution time, with a minimized waiting time for tasks in the queue. Then, an enhanced ABC-based load balancing mechanism was propounded by [24] to ensure a high degree of load balancing and scheduling process in the clouds. This enhanced ABC-based load balancing scheme concentrated on reducing the make-span of tasks with a minimized number of VM migrations. It categorized the underloaded tasks as the food sources and the number of tasks isolated from the overloaded VMs as the honeybees in the implemented algorithm [25]. The foraging activities of the honeybees are included in the load balancing process to effectively balance the load among the available virtual machines in the cloud environment.
Proposed Hybrid Invasive Weed Improved Grasshopper Optimization Algorithm-based Load Balancing (HIWIGOA-LB) Scheme
The proposed HIWIGOA-LB scheme is propounded as a reliable attempt to facilitate predominant load balancing between virtual machines in clouds based on the benefits of Improved GOA (IGOA) and the Invasive Weed Optimization Algorithm (IWOA) for balancing exploitation and exploration. This proposed HIWIGOA-LB scheme concentrates on enhancing the population initialization and exploitation of the search space to enable predominant load balancing between virtual machines in clouds. It includes a weighted task scheduling process based on the optimization problem formulated using the parameters of energy, makespan, response time, datacenter cost, and degree of imbalance. The proposed HIWIGOA-LB scheme focuses on three vital potentialities that correspond to: (i) the classification of VMs into under-loaded and over-loaded groups during the process of load balancing; (ii) the energy minimization of the datacenter for the objective of minimizing the overall incurred cost; and (iii) the identification of the complete set of VMs in the datacenter that are under-utilized or over-utilized for attaining load balancing in an effective manner. It depicts feasible dimensions that might be used for the formulation of upper and lower thresholds that could form an indicator to identify the over-utilization and under-utilization of VMs by the number of tasks entering the cloud environment.
HIWIGOA-Based Load-Balancing Algorithm
In this section, a detailed view of the HIWIGOA algorithm is initially presented, followed by the proposed load balancing algorithm with a closer insight into the proposed mechanism.
Standard Grasshopper Optimization Algorithm (GOA)
Eq. (1) is used in the standard GOA to determine how to update the position SA_G(i) of the grasshoppers (search agents) to map tasks into suitable VMs,
where S_I(i) and G_F(i) represent the social interaction and gravity force factors that drive the search agent's position change. Moreover, W_A(i) depicts the wind advection factor that influences the degree of exploitation or exploration that needs to be performed by the search agent. At this juncture, the factor of social interaction is computed based on Eqs. (2) and (3).
where "a" and "b" represent the adjusting factors that play an anchor role in attaining flexibility of the social parameters. Moreover, the value of d ij representing the euclidean distance with respect to i th and j th search agenst is calculated based on Eq. (4).
Furthermore, the parameters of social interaction and gravity force with respect to the search agent are calculated based on Eqs. (5) and (6).
where g_const and u_const represent the gravity and wind power constants, which play an anchor role in impacting the search process of the search agents. Furthermore, ê_g and ê_W highlight the unit vectors of gravity and wind force, respectively, emphasizing the significance of the search agent movement updating process.
Furthermore, the search agent updating process presented in Eq. (1) can be improved using Eq. (7) by including the parameters S_I(i), G_F(i), and W_A(i) demonstrated through Eqs. (2), (5), and (6).
where N represents the number of grasshoppers. To apply the GOA to solve the optimization problems, a modified mathematical model can be presented as follows, where S_N highlights the number of search agents (grasshoppers). In this context, the mathematical model is modified for use in this type of optimization, such as the mapping of tasks to appropriate VMs specified in Eq. (9).
where U_T^d and L_T^d depict the upper and lower thresholds of the dimension "d" considered for exploration, with P̂_Best^d being the best position of the search agent during the process of optimization. In addition, the value of the adjusting coefficient c is determined based on Eq. (10).
where c_Max and c_Min represent the maximum and minimum values of the adjusting constant, respectively, and Iter_Curr and Iter_Max denote the current implementation iteration and the maximum number of iterations.
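For illustration, the Python sketch below reimplements the core ingredients of the standard GOA update described above: the social-interaction function, the linearly decreasing adjusting coefficient c of Eq. (10), and a simplified position update of the Eq. (9) form. It is an assumed, generic re-implementation, not the authors' code; parameter values and bounds are placeholders.

```python
# Illustrative, generic GOA ingredients (not the authors' code).
import numpy as np

def social_force(r, f=0.5, l=1.5):
    """Social-interaction function s(r) = f*exp(-r/l) - exp(-r) of the standard GOA."""
    return f * np.exp(-r / l) - np.exp(-r)

def coefficient(iter_curr, iter_max, c_max=1.0, c_min=1e-4):
    """Linearly decreasing adjusting coefficient c (cf. Eq. (10))."""
    return c_max - iter_curr * (c_max - c_min) / iter_max

def update_positions(pos, target, lb, ub, c):
    """Simplified position update: social attraction/repulsion between agents plus a
    pull toward the best position found so far (cf. the Eq. (9) form)."""
    n, dim = pos.shape
    new_pos = np.empty_like(pos)
    for i in range(n):
        move = np.zeros(dim)
        for j in range(n):
            if i == j:
                continue
            dist = np.linalg.norm(pos[j] - pos[i]) + 1e-12
            unit = (pos[j] - pos[i]) / dist
            move += c * (ub - lb) / 2.0 * social_force(dist) * unit
        new_pos[i] = np.clip(c * move + target, lb, ub)
    return new_pos
```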
Inclusion of IWOA for Improving GOA
In the GOA algorithm, the objective function value is completely ignored during the movement of each search agent. A search agent (grasshopper) with a better objective function value necessitates a bigger step to determine the best solutions that can be feasibly identified in the search space. Hence, the capability of exploitation offered by GOA needs significant improvement. Moreover, GOA suffers from the problem of falling into the local point of optimality. To address these two issues, IWOA is included in the GOA algorithm for attaining a better allocation of resources to VMs and balancing the load systematically. This HIWIGOA-LB scheme is proposed for achieving global optimality during the process of load balancing and task scheduling. The hybridization of IWOA into GOA was attained using a random walk strategy to improve its exploitation potential. This inclusion of IWOA accelerates the convergence rate of GOA and aids in controlling the step movement of the search agents towards the optimal solution. Moreover, the types and steps used for position updating by the HIWIGOA are achieved through the values of the objective functions and the iteration numbers. In particular, the random walk and IWOA strategies are hybridized first to improve the local search potential. Secondly, the strategy of grouping is adopted for balancing the tradeoff between exploration and exploitation. In addition, Fig. 1 presents the process of hybridizing IWOA with GOA in the HIWIGOA algorithm.
In this context, the search agent (plant) in the IWOA algorithm has the capability of generating more solutions when it possesses a better objective function. The possible number of solutions that could be generated by the IWOA algorithm is presented in Eq. (11), and the associated dispersal is given in Eq. (12), where s_Initial and s_Final represent the initial and final standard deviation values associated with the generated candidate solutions.
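A minimal sketch of the IWOA reproduction rule that Eqs. (11) and (12) refer to, based on the classical invasive weed formulation (the constants and orientation of the fitness values are assumptions): fitter candidates spawn more seeds, and the dispersal standard deviation decays from s_Initial to s_Final over the iterations.

```python
# Illustrative IWOA-style reproduction rule (assumed constants, not Eqs. (11)-(12)).
def seed_count(fitness, best_fit, worst_fit, seeds_min=0, seeds_max=5):
    """Number of new candidates a solution spawns, scaled by relative fitness
    (here 'better' is assumed to mean a larger fitness value)."""
    if best_fit == worst_fit:
        return seeds_max
    ratio = (fitness - worst_fit) / (best_fit - worst_fit)
    return int(round(seeds_min + ratio * (seeds_max - seeds_min)))

def dispersal_sigma(iter_curr, iter_max, s_initial=1.0, s_final=0.01, n=3):
    """Dispersal standard deviation shrinking from s_Initial to s_Final with iterations."""
    return ((iter_max - iter_curr) / iter_max) ** n * (s_initial - s_final) + s_final
```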
Strategy of Random Walk
In this proposed HIWIGOA algorithm, the strategy of random walk is adopted for enhancing the potentiality of exploitation during the process of load balancing tasks over available VMs. In this strategy, new solutions SA_G(i)^New are determined in a more random manner from the primary best SA_G(i)^FBest and secondary best SA_G(i)^SBest solutions, as specified in Eq. (13).
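The random-walk idea of Eq. (13) can be sketched as follows (an assumed form, not the paper's exact equation): a new candidate is drawn between the two current best solutions and then perturbed randomly.

```python
# Illustrative random-walk candidate generation (assumed form of Eq. (13)).
import numpy as np

def random_walk_candidate(first_best, second_best, lb, ub, step=0.1):
    """Draw a new solution between the two best positions and perturb it randomly."""
    r1, r2 = np.random.rand(), np.random.rand()
    candidate = r1 * first_best + (1 - r1) * second_best
    candidate = candidate + step * r2 * (2 * np.random.rand(*first_best.shape) - 1)
    return np.clip(candidate, lb, ub)
```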
Strategy of Grouping
In general, different inertial weights are useful for enhancing the potential of optimization algorithms depending on the contexts in which they are employed. The strategy of grouping is utilized for modifying the coefficient c of the traditional GOA for balancing the rate of exploration and exploitation. This coefficient c decreases linearly as the number of iterations increases, which gradually shifts the search from exploration toward exploitation. The modified coefficient value is then determined based on Eq. (14).
The above-mentioned coefficient, c_mod, is utilized for search process optimization. In this strategy of grouping, the complete population of search agents is partitioned into three groups, namely elite, onlooker, and scout agents, depending on the objective function values they possess, as in Eq. (15).
At this juncture, the elite search agents inherit better objective function values and thereby exhibit small step movements. The onlooker search agents, on the other hand, possess moderate objective function values and update their positions similar to the GOA algorithm. In particular, scout search agents play an important role in preventing the problem of local optimality that affects the traditional GOA algorithm. In this strategy, the positions of the scout search agents, which possess the worst objective function values, are generated randomly during the first three quarters of the iterations. During the remaining quarter of iterations, the scout search agents play an indispensable role in improving the capability of exploration. Thus, the coefficient c_mod associated with the scout search agents is updated based on Eq. (16).
Finally, this modified coefficient c_mod is incorporated into Eq. (9) to make it ideal for mapping the incoming tasks to suitable VMs, as represented in Eq. (17).
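The grouping strategy can be sketched in Python as follows; the equal-thirds split, the minimization orientation, and the exact form of the scout coefficient update are simplifying assumptions rather than the paper's Eqs. (15)-(16).

```python
# Illustrative grouping of search agents (equal thirds and the scout schedule are
# simplifying assumptions; minimization of the objective is assumed).
import numpy as np

def group_agents(fitnesses):
    """Split agents into elite / onlooker / scout groups by objective value."""
    order = np.argsort(fitnesses)              # best (smallest) first
    n = len(order)
    elite    = order[: n // 3]
    onlooker = order[n // 3: 2 * n // 3]
    scout    = order[2 * n // 3:]
    return elite, onlooker, scout

def scout_coefficient(c, iter_curr, iter_max):
    """Scout handling: random repositioning during the first three quarters of the
    run, a boosted exploration coefficient afterwards (cf. the text around Eq. (16))."""
    if iter_curr < 0.75 * iter_max:
        return None          # caller regenerates the scout position at random
    return 2.0 * c           # assumed boost factor for late-stage exploration
```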
System Model and Assumptions
Furthermore, the load that has the possibility of being assigned to each virtual machine, LOAD_VM, is determined based on the total number of tasks TN_Tasks(T, t) and the service rate SR(VM_i, t) of the virtual machines at time 't', as represented in Eq. (20). Then, the sum of the task loads assigned to the complete set of virtual machines existing in the cloud environment is presented based on Eq. (21).
Eq. (22) shows the time incurred for processing the tasks submitted to the total number of VMs present in the cloud environment.
Furthermore, the task execution time and processing time of each task assigned to the individual VMs present in the clouds are presented through Eqs. (23) and (24). Thus, the time of finishing an individual task assigned to a specific VM is calculated based on the execution and start times of the tasks incoming to the clouds, as given in Eq. (25): TF(TS_n) = STime(TS_n) + Time_Exec (25). At this juncture, the decision variable DS_VAR(i,j) considered for assigning the incoming tasks to the corresponding VMs is determined using the processing time of tasks as presented in Eq. (26).
In this context, "makespan" refers to the total time incurred for task completion based on the efficient allocation of VMs. This factor of makespan needs to be potentially minimized. Thus, the objective fitness function concentrates on minimizing the makespan of the tasks into the cloud computing environment as specified in Eq. (27).
where ft_ij is the finishing time incurred by a task TS_s over a virtual machine VM_j. Furthermore, the energy consumption incurred by the execution of the task TS_s over a virtual machine VM_j is represented as EC_Task. Thus, the objective function concentrating on energy consumption is presented in Eq. (29).
In addition, the datacenter cost is the other parameter considered for task scheduling of the VMs depending on their availability as determined by Eq. (30).
where Cost_PER-UNIT is the cost incurred for utilizing 1 kWh of energy by the data center under operation in the cloud environment.
Hence, the objective function concentrates on minimizing datacenter cost in Eq. (31).
Finally, the aforementioned objective functions formulated based on makespan, energy consumption, and data center cost are subjected to a set of constraints.
The aforementioned constraints emphasize that only a single task needs to be allocated to each individual VM, the time of executing the task must be lower than the deadline for completing that particular task by the VM, and the calculated standard deviation of load should be lower than the upper value of the threshold in VM allocation. In addition, the process of load balancing also depends on the degree of imbalance as presented in Eq. (35).
where Max_Task(s), Min_Task(s), and Mean_Task(s) represent the maximum, minimum, and mean number of tasks present in the cloud computing environment.
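As a worked example of the scheduling metrics above, the sketch below computes per-VM completion times for a hypothetical task-to-VM mapping, the makespan (maximum finishing time, in the spirit of Eq. (27)), and a degree of imbalance in the common (max − min)/mean form; note that Eq. (35) is phrased in terms of task counts, so applying it to completion times here is an assumption.

```python
# Worked example with hypothetical cloudlets and VMs (not the paper's workload).
def vm_completion_times(task_lengths, assignment, vm_mips):
    """Per-VM completion time: sum of length/speed of the tasks mapped to each VM."""
    times = [0.0] * len(vm_mips)
    for task, vm in enumerate(assignment):
        times[vm] += task_lengths[task] / vm_mips[vm]
    return times

def makespan(times):
    return max(times)

def degree_of_imbalance(times):
    """(T_max - T_min) / T_mean over the per-VM completion times."""
    return (max(times) - min(times)) / (sum(times) / len(times))

lengths = [400, 250, 900, 300, 700, 150]   # cloudlet lengths (MI), hypothetical
speeds  = [100, 200, 150]                  # VM speeds (MIPS), hypothetical
mapping = [0, 1, 2, 0, 1, 2]               # task index -> VM index
t = vm_completion_times(lengths, mapping, speeds)
print(f"makespan = {makespan(t):.2f} s, DOI = {degree_of_imbalance(t):.3f}")
```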
Implementation Steps of the Proposed HIWIGOA-LB Scheme
The HIWIGOA-LB scheme uses a multi-objective function for deciding about the allocation or reallocation of new or old tasks to an appropriate virtual machine or host. This allocation and reallocation depend on the primitive constraints that emphasize that the load of a VM should not exceed the upper limit value after a task has been assigned to it. Then, the constraint of deadline is considered when there is a huge amount of availability in the VMs. Moreover, the migration of tasks from a heavily loaded VM to a lightly loaded VM is imperative depending on the required deadline or completion time of the tasks. In this context, the VM with the minimum value of the greater deadline task is selected when the completion time of the incoming task or re-allocating task is high. In contrast, VMs with higher and moderate deadline tasks are selected when the completion time of the incoming task or re-allocating task is moderate. Further, the grouping of virtual machines is completely based on the existing load of the virtual machine. In this proposed scheme, two categories of groups are formed, designated as under-loaded and over-loaded VM groups, based on the estimation of the objective function. The VMs present in the over-loaded VM groups are made to remove tasks, which wait until a potential VM is identified for allocation in the next iteration. The VMs existing in the under-loaded groups are allocated the waiting tasks or tasks that need to be reallocated. This process of removing tasks from the over-loaded VM groups is continued until the number of under-loaded VMs is null. In this context, the solutions (hosts or VMs to be allocated) represent the grasshopper search agents, which are generated based on the principle of randomness. This proposed HIWIGOA-LB scheme utilizes the benefits of the Pareto ranking scheme for handling the multi-objective optimization problems involved in allocating tasks to the VMs based on their availability, quantified in terms of under-loaded and over-loaded conditions. It also stores the non-dominated solutions generated previously by keeping the history of best solutions determined by the grasshopper search agents. Thus, the aforementioned process assists in the potential allocation of tasks to the VMs based on their over-allocation and under-allocation constraints, which play a vital role in the load balancing process.
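A simplified Python sketch of the allocation/reallocation loop described in this subsection is given below; the thresholds, the greedy heaviest-task-first choice, and the move bound are simplifications and assumptions, not the authors' implementation.

```python
# Simplified greedy rebalancing loop (thresholds, task choice and stopping rule are
# assumptions for illustration, not the authors' implementation).
def classify(vm_loads, lower, upper):
    over  = [v for v, load in vm_loads.items() if load > upper]
    under = [v for v, load in vm_loads.items() if load < lower]
    return over, under

def rebalance(vm_loads, vm_tasks, task_cost, lower, upper, max_moves=100):
    """Migrate tasks from over-loaded to under-loaded VMs until none is under-loaded,
    no over-loaded VM remains, or the move budget is exhausted."""
    over, under = classify(vm_loads, lower, upper)
    moves = 0
    while over and under and moves < max_moves:
        src = max(over, key=lambda v: vm_loads[v])             # most loaded source
        dst = min(under, key=lambda v: vm_loads[v])            # least loaded target
        if not vm_tasks[src]:
            break
        task = max(vm_tasks[src], key=lambda t: task_cost[t])  # heaviest task first
        vm_tasks[src].remove(task)
        vm_tasks[dst].append(task)
        vm_loads[src] -= task_cost[task]
        vm_loads[dst] += task_cost[task]
        moves += 1
        over, under = classify(vm_loads, lower, upper)
    return vm_loads, vm_tasks
```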
Simulation Results and Discussion
In this section, the experimental setup used for conducting the experimental investigation of the proposed HIWIGOA-LB scheme and the benchmarked BBSWOA-LB (Binary Bird Swarm Optimization Algorithm-based Load Balancing), Adaptive Particle Swarm Optimization Algorithm-based Load Balancing (APSOA-LB), Honey Bee Optimization Algorithm-based Load Balancing (HBOA-LB), and Artificial Bee Colony Monarchy Butterfly Optimization Algorithm-based Load Balancing (ABCMBOA-LB) schemes is presented. Then, the different validation tests conducted for quantifying the potential of the proposed HIWIGOA-LB scheme over the benchmarked schemes are also demonstrated.
Experimental Setup
The implementation of the proposed HIWIGOA-LB scheme is achieved using the CloudSim toolkit classes, which are extended for modeling and simulating the environment of the cloud. The CloudSim simulator aided in constructing a virtualized environment that facilitates on-demand provisioning. It is used for modeling, simulating, and experimenting with cloud applications and services. Ten feasible solutions were defined for the algorithm during the experimental process, with the maximum number of iterations assigned to 100 iterations. In addition, the parameters considered during the implementation of the proposed HIWIGOA-LB scheme are depicted in "Tab. 1". In this experimental investigation, the proposed HIWIGOA-LB scheme and the benchmarked schemes were evaluated with respect to makespan, degree of balance, DOI, and execution time realized before and after the load balancing process under the number of cloudlets set to 500. Fig. 2 presents the makespan achieved by the proposed HIWIGOA-LB and the baseline BBSWOA-LB, APSOA-LB, HBOA-LB, and ABCMBOA-LB approaches.
The results proved that the makespan guaranteed by the proposed HIWIGOA-LB scheme after load balancing (12 s) is comparatively better than the makespan (5 s) ensured by it before load balancing. The deviation in the time realized during the implementation of the HIWIGOA-LB scheme is 7 s. But, the baseline BBSWOA-LB, APSOA-LB, HBOA-LB, and ABCMBOA-LB approaches facilitated a gap of only 4 s, 3 s, and 3 s, respectively. This potential minimization in the makespan confirmed by the proposed HIWIGOA-LB scheme is mainly due to the utilization of the random strategy of IWOA into GOA that is attributed to a better balance between the exploitation and exploration rate. Thus, the proposed HIWIGOA-LB scheme minimized the makespan before and after load balancing by 19.56%, which is comparatively better than the minimized makespan of 13.21%, 11.98%, 10.52%, and 9.64%, facilitated by the baseline BBSWOA-LB, APSOA-LB, HBOA-LB, and ABCMBOA-LB approaches. Furthermore, Fig. 3 depicts the degree of imbalance (DOI) achieved by the proposed HIWIGOA-LB and the baseline BBSWOA-LB, APSOA-LB, HBOA-LB, and ABCMBOA-LB approaches. The DOI attained by the proposed HIWIGOA-LB scheme after load balancing is identified to be 15%, which is an improvement over the DOI of 24 ensured by it before load balancing. Hence, the deviation in DOI visualized during the implementation of the HIWIGOA-LB scheme is 9%. However, the benchmarked BBSWOA-LB, APSOA-LB, HBOA-LB, and ABCMBOA-LB approaches only guaranteed a deviation of 4%, 5%, 5%, and 3%, respectively. This predominant minimization in DOI is attained by the proposed HIWIGOA-LB scheme mainly due to the constraints that are enforced during the process of identifying the status of under and over-utilization of VMs, such that the incoming tasks may be optimally allocated to suitable VMs. Thus, the proposed HIWIGOA-LB scheme minimized the DOI before and after load balancing by 22.18%, which is comparatively better than the minimized DOI of 15.41%, 12.38%, 11.94%, and 10.72%, facilitated by the baseline BBSWOA-LB, APSOA-LB, HBOA-LB, and ABCMBOA-LB approaches. Moreover, Fig. 4 demonstrates the execution time incurred by the proposed HIWIGOA-LB scheme and the baseline BBSWOA-LB, APSOA-LB, HBOA-LB, and ABCMBOA-LB approaches with respect to makespan and resource utilization with the number of cloudlets set to 500. The execution time incurred for makespan and resource utilization by the proposed HIWIGOA-LB scheme is highly minimized and on par with the benchmarked load balancing approaches. This minimization in execution time by the proposed HIWIGOA-LB scheme is mainly due to the adoption of a random walk of IWOA into GOA with a sustained balance between exploration and exploration. It also handled the problem of local point of optimality and thereby improved the execution rate compared to the existing schemes used for investigation. With respect to makespan, the execution time incurred by the proposed HIWIGOA-LB scheme is minimized by 14.21%, which is comparatively better than the execution time minimization of 10.86%, 9.42%, 8.23%, and 7.92%, ensured by the baseline BBSWOA-LB, APSOA-LB, HBOA-LB, and ABCMBOA-LB approaches. On the other hand, the proposed HIWIGOA-LB scheme with respect to resource utilization minimized the execution time by 17.32%, which is superior on par with 13.28%, 12.42%, 11.06%, and 10.64%, guaranteed by the benchmarked approaches. 
In this experimental investigation, the proposed HIWIGOA-LB scheme and the benchmarked schemes are evaluated based on mean throughput, response time, resource utilization rate, and degree of imbalance (DOI) with different numbers of cloudlets. Figs. 5 and 6 present the performance of the proposed HIWIGOA-LB scheme and the baseline BBSWOA-LB, APSOA-LB, HBOA-LB, and ABCMBOA-LB approaches evaluated based on the parameters of mean throughput and response time with different numbers of cloudlets. The response time is identified to be remarkably minimized by the proposed HIWIGOA-LB scheme as the parameters considered for optimization are globally optimized during the process of task scheduling. Further, the mean throughput is significantly enhanced by the proposed HIWIGOA-LB scheme as it adopts the grouping strategy to perform a balanced local and global optimization process. Thus, the results proved that the mean throughput achieved by the proposed HIWIGOA-LB scheme with different numbers of cloudlets is improved on average by 14.56%, 12.39%, 11.85%, and 10.21%, better than the benchmarked approaches. Moreover, the response time of the proposed HIWIGOA-LB scheme is reduced by 13.96%, 11.24%, 10.78%, and 9.12%, better than the benchmarked BBSWOA-LB, APSOA-LB, HBOA-LB, and ABCMBOA-LB approaches. Figs. 7 and 8 demonstrate the performance of the proposed HIWIGOA-LB scheme and the baseline BBSWOA-LB, APSOA-LB, HBOA-LB, and ABCMBOA-LB approaches evaluated based on the parameters of resource utilization rate and DOI with different numbers of cloudlets. This significant performance of the proposed HIWIGOA-LB scheme in terms of resource utilization rate and DOI is realized mainly due to the adoption of a grouping strategy that played an anchor role in determining a better optimal solution in the search space. The results proved that the resource utilization rate guaranteed by the proposed HIWIGOA-LB scheme with different numbers of cloudlets is improved on average by 17.65%, 15.21%, 13.29%, and 11.84%, better than the benchmarked approaches. Moreover, the DOI of the proposed HIWIGOA-LB scheme is reduced by 16.24%, 14.29%, 12.89%, and 10.43%, which is superior to the benchmarked BBSWOA-LB, APSOA-LB, HBOA-LB, and ABCMBOA-LB approaches.
Conclusion
The proposed HIWIGOA-LB scheme achieves potential load balancing and task scheduling by optimally managing the degree of exploration and exploitation in a more balanced manner through the adoption of the IWOA and GOA algorithms. It integrated IWOA and GOA using the strategies of random walk and grouping to improve the exploration capability during optimization and to improve exploitation by preventing the problem of local optimality, in which most optimization algorithms get stuck in local solutions. The simulation results of the proposed HIWIGOA-LB scheme confirmed a 21% reduction with respect to makespan, which is comparatively better than the execution time minimization of 10.86%, 9.42%, 8.23%, and 7.92% ensured by the baseline BBSWOA-LB, APSOA-LB, HBOA-LB, and ABCMBOA-LB approaches. In addition, the results also confirmed that the resource utilization rate guaranteed by the proposed HIWIGOA-LB scheme with different numbers of cloudlets is improved on average by 17.65%, 15.21%, 13.29%, and 11.84%, better than the benchmarked approaches. As a part of the future scope, it has been decided to formulate a binary spider monkey optimization algorithm and compare it with the proposed HIWIGOA-LB scheme under different conditions of experimentation.
Funding Statement: The authors received no specific funding for this study.
Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.
Extinction time for the contact process on general graphs
We consider the contact process on finite and connected graphs and study the behavior of the extinction time, that is, the amount of time that it takes for the infection to disappear in the process started from full occupancy. We prove, without any restriction on the graph $G$, that if the infection rate $\lambda$ is larger than the critical rate of the one-dimensional process, then the extinction time grows faster than $\exp\{|G|/(\log|G|)^\kappa\}$ for any constant $\kappa>1$, where $|G|$ denotes the number of vertices of $G$. Also for general graphs, we show that the extinction time divided by its expectation converges in distribution, as the number of vertices tends to infinity, to the exponential distribution with parameter 1. These results complement earlier work of Mountford, Mourrat, Valesin and Yao, in which only graphs of bounded degrees were considered, and the extinction time was shown to grow exponentially in $n$; here we also provide a simpler proof of this fact.
Introduction
The contact process (ξ_t)_{t≥0} with infection rate λ on a graph G = (V, E) is the Markov process on the space {0, 1}^V whose generator is given, for any cylindrical function f, by
Lf(ξ) = Σ_{x: ξ(x)=1} [f(ξ^{x←0}) − f(ξ)] + λ Σ_{x: ξ(x)=0} |{y ∼ x : ξ(y) = 1}| [f(ξ^{x←1}) − f(ξ)],
where y ∼ x means that x and y are neighbors and ξ^{z←i}, for z ∈ V and i ∈ {0, 1}, is the configuration defined by ξ^{z←i}(z) = i and ξ^{z←i}(x) = ξ(x) for any x ≠ z. Vertices of the graph are interpreted as individuals in a population; each individual can be healthy (state 0) or infected (state 1). The above generator prescribes that infected individuals become healthy with rate 1 and transmit the infection to each neighbor with rate λ.
We denote by 0 and 1 the elements of {0, 1}^V that are identically equal to 0 and 1, respectively. Inspecting the above generator shows that 0 is an absorbing state for the dynamics. Let x ∈ V and A ⊆ V; we denote by (ξ^x_t), (ξ^A_t) and (ξ^1_t) the process started from 1_{x}, 1_A and 1, respectively (1 is the indicator function). We also denote by P_λ a probability measure under which the contact process with rate λ is defined on the graph G (which will be clear from the context, as will the initial configuration of the process); later we will fix λ and omit it from the notation as well. We denote by E_λ, or sometimes simply E, the associated expectation.
In [15], the reader can find a thorough introduction to the contact process. For the sake of the remainder of this introduction, let us say a few words about its phase transition, starting with the case G = Z^d, the d-dimensional integer lattice. Define the following survival events: S_global := {ξ^0_t ≠ 0 for all t} ⊇ {for all t_0 there exists t_1 > t_0 : ξ^0_{t_1}(0) = 1} =: S_local.
Then, there exists λ_c = λ_c(Z^d) > 0 so that: if λ ≤ λ_c, then P_λ[S_global] = 0, and if λ > λ_c, then P_λ[S_global] > 0 and P_λ[S_local | S_global] = 1. Now take G = T_d, the infinite regular tree of degree d ≥ 3, fix a root vertex and denote it by 0, and take the same survival events as defined above. Then, there exist λ^(1)_c = λ^(1)_c(T_d) and λ^(2)_c = λ^(2)_c(T_d) so that 0 < λ^(1)_c < λ^(2)_c < ∞ and: if λ ≤ λ^(1)_c, then P_λ[S_global] = 0; if λ^(1)_c < λ ≤ λ^(2)_c, then P_λ[S_global] > 0 and P_λ[S_local] = 0; if λ > λ^(2)_c, then P_λ[S_global] > 0 and P_λ[S_local | S_global] = 1. In case G is a finite graph, we have P_λ[S_global] = P_λ[S_local] = 0, since the process is then a continuous-time Markov chain with a finite state space and the trap 0 can be reached from any other configuration; in particular the extinction time is necessarily finite. Hence, on finite graphs there can be no phase transition in the sense presented in the previous paragraph. Still, one can study the dependence of the process on the value of λ, and in some cases make sense of a finite-volume phase transition. This project typically goes as follows: one fixes λ > 0 and some sequence of graphs (G_n)_{n≥1} (usually converging or increasing, in some sense, to an infinite graph, or belonging to some class of random graphs), and then studies the asymptotic behavior of the random variables τ_{G_n}, including their dependence on λ. This has been carried out in the case of boxes of Z^d ([4], [21], [9], [6], [11], [16], [17]), finite homogeneous trees ([22], [7]), the configuration model ([5], [19], [3], [20], [13]) and the preferential attachment graph ([1], [2]).
These are successful case studies, but they of course depend on exploring the structure of the graphs under consideration and sometimes their relation to some infinite (possibly random) graph. In contrast, one may wonder if there are results that are context-free, that is, that hold for arbitrary sequences of graphs. Indeed, the following facts have been established. Given a graph G, let |G| denote its number of vertices. Theorem 1.1 (i) [20] For any d ∈ N and λ < λ (1) c (T d ) there exists C > 0 such that, for any graph G with degree bounded by d and at least two vertices, (ii) [18] For any d ∈ N and λ > λ c (Z) there exists c > 0 such that, for any connected graph G with degree bounded by d and at least two vertices, Our motivation in this paper is to improve the second part of Theorem 1.1. With the generality that the result is stated, the restriction that λ > λ c (Z) cannot be relaxed: the class of graphs under consideration includes line segments of Z and for those, the extinction time grows logarithmically with the number of vertices when λ < λ c (Z). In contrast, the requirement that the degree be bounded seems unnecessary: if vertices of larger and larger degree are present, this should only contribute to the extinction time being larger. The reason this requirement was present in [18] was a technical convenience: it allowed for the application of a certain lemma concerning the splitting of trees into large subtrees (this lemma is reproduced here: see Lemma 2.2 below). Our main result is: For any λ > λ c (Z) and any ε > 0, there exists a constant c ε such that for any connected graph G with at least two vertices, and, for any non-empty A ⊆ G, This theorem, as well as the second part of Theorem 1.1, imply that any sequence of graphs has a "supercritical phase", which contains the parameter values λ ∈ (λ c (Z), ∞). This is certainly informative, but in many specific cases λ c (Z) is not the optimal threshold; for example, if G n is given by increasing boxes of Z d with d large, then the extinction time grows exponentially if λ > λ c (Z d ), which is smaller than λ c (Z). More drastically, in some graphs with unbounded degree, such as the configuration model with power law degree distribution or the preferential attachment graph, the extinction time grows as an exponential (or at least stretched exponential) function of |G n | for any positive λ.
In spite of not directly giving the optimal rate in specific cases, Theorem 1.1 (ii) and Theorem 1.2 can be useful in the process of obtaining the optimal rate. For one thing, our proof of Theorem 1.2 is versatile in that it relies on quite useful inequalities and simple methods and could easily be adapted to other contexts (see below for a discussion of our strategy of proof). In addition, lower bounds on the extinction time often follow from some type of coarse graining or renormalization procedure in which, by partitioning space and time into large units, one obtains a new version of the process, in which a notion of infection rate can also be made precise and can often be made as large as desired. An instance of this is found in [18], where Theorem 1.1 is used in the treatment of the contact process on a graph given by the configuration model with a power law degree distribution. We also prove:

Theorem 1.3 For any λ > λ_c(Z) and any sequence of graphs (G_n)_{n∈N} with |G_n| → ∞ as n → ∞, the extinction time τ_{G_n}, divided by its expectation, converges in distribution to the exponential distribution with parameter 1.

This is a generalization of Theorem 1.2 of [18], which is the same statement with a bounded degree assumption. Let us make some comments on the proofs of these results now. Our main tool is a completely general coupling result, Proposition 2.7, which shows that on any graph, if the process starting from a single vertex survives for a time comparable to the size of the graph, then with high probability it couples with (meaning that it is equal to) the process starting from full occupancy. It is well known that this, together with a mild lower bound on the extinction time, already implies Theorem 1.3. Another important consequence is Proposition 2.9, which asserts that for any decomposition of a graph into disjoint components (or subgraphs), the mean extinction time on the original graph is larger than the product of the mean extinction times on these subgraphs, up to some correction term. This term remains negligible as long as the number of components in the decomposition is not too large. Such a result is of course particularly well suited for proofs going by induction on the size of the graph, especially for proving exponential (or almost exponential) lower bounds, by virtue of the formula exp(x + y) = exp(x) exp(y). With Proposition 2.9 at hand, we prove Theorem 1.2 and also give a new proof of Theorem 1.1 (ii), simpler than the one in [18]. Since in Theorem 1.1 (ii) it is assumed that the degrees are bounded, one only needs to split the graph into a bounded number of pieces, independently of the size of the graph, so that the correction term in Proposition 2.9 causes no problem, and we get a true exponential lower bound (a similar proof was used in [7] in the setting of finite regular trees). However, for general graphs, the number of pieces required in the decomposition might be very large, making the correction term explode, and this explains why we have the logarithmic term in Theorem 1.2. The paper is organized as follows. Section 2 contains all the material preparing for the proofs of the main results. In particular, in Subsection 2.1 we recall some standard definitions and fix some notation. In Subsection 2.2, we give some basic tools, among which are some preliminary estimates for the contact process on a line segment and a star graph. In Subsection 2.3 we state and prove the main tools discussed above, namely the coupling result, Proposition 2.7, and Proposition 2.9. Then Section 3 contains the actual proofs of the main results. It is organized as follows.
We first give in Subsection 3.1 a mild polynomial lower bound. As we already mentioned, together with the coupling lemma, this implies Theorem 1.3: we explain this in slightly more details in Subsection 3.2. In Subsection 3.3 we prove a stretched exponential lower bound, which is a necessary intermediate step toward the proof of Theorem 1.2. In Subsection 3.4 we explain how one can also deduce Theorem 1.1 (ii), by using induction on the size of the graph. Finally the full proof of Theorem 1.2 is given in Subsections 3.5 and 3.6 where we put all pieces together.
Preliminary results and tools

Notation and definitions
A graph will be understood as a set V of vertices and a set E ⊆ {{x, y} ⊆ V : x = y} of edges. Thus, for convenience we will not explicitly treat graphs with loops (edges that start and end at the same vertex) and parallel edges between vertices, though one can define the contact process on those graphs as well and our results could then be readily adapted. The graphs we consider will always be connected. We denote by |G| the number of vertices of G; for a set A, we denote by |A| the number of elements of A. We will often abuse notation and identify a graph with its set of vertices; for example, we may write {0, 1} G in place of {0, 1} V . Remark 2.1 For several of our results, it is sufficient to give a proof for trees only. For example, if Theorem 1.2 is proved for trees and we then consider a general graph G, we can apply the result to an arbitrary spanning tree T of G and observe that the contact process on T is dominated (in the natural stochastic order of configurations) by the contact process on G, hence the extinction time of the latter is larger. We will not repeat this sufficiency in every situation in which it applies.
From now on, we fix a value λ > λ_c(Z) and will omit it from the notation. In particular, many of the constants we define below may depend on λ. In order to fix notation, we will quickly go over the very well-known graphical construction of the contact process. Fixing G = (V, E), we take a family of independent Poisson point processes on [0, ∞): a process D_x with rate 1 for each vertex x ∈ V, and a process D_(x,y) with rate λ for each ordered pair (x, y) of vertices with {x, y} ∈ E. Such a family is called a Harris system. We view each of these processes as a random discrete subset of [0, ∞). Arrivals of the processes (D_x) are called recovery marks, and arrivals of the processes (D_(x,y)) are called transmissions. Given x, y ∈ V and 0 ≤ s < t, an infection path from (x, s) to (y, t) is a function γ : [s, t] → V such that γ(s) = x, γ(t) = y, r ∉ D_γ(r) for all r ∈ [s, t], and r ∈ D_(γ(r−),γ(r)) whenever γ(r−) ≠ γ(r).
If such a path exists, we say that (x, s) and (y, t) are connected by an infection path, and write (x, s) ↔ (y, t). By convention, we put (x, s) ↔ (x, s).
Setting, for A ⊆ V, y ∈ V and t ≥ 0, ξ_t^A(y) = 1{(x, 0) ↔ (y, t) for some x ∈ A}, we obtain a Markov process (ξ_t^A)_{t≥0} with ξ_0^A = 1_A and the same distribution as the process given by the generator (1.1). We will always assume that the contact process is constructed in this way.
As mentioned in the introduction, we denote by 0 and 1 the configurations which are identically 0 and 1, respectively, and define the extinction time τ G = inf{t : ξ 1 t = 0}.
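The graphical construction also lends itself to direct simulation. The following minimal Python sketch (ours, not part of the original text) estimates E[τ_G] on a small graph by simulating the equivalent jump chain of the process; the use of the networkx library, the function name, the chosen graph and the value of λ are purely illustrative assumptions.

```python
import random

import networkx as nx  # illustrative choice of graph library

def contact_process_extinction_time(G, lam, rng):
    """Gillespie-type simulation of the contact process on a finite graph G.

    Every infected vertex recovers at rate 1 and transmits to each neighbour at
    rate lam; the returned value is one sample of the extinction time tau_G
    started from the fully occupied configuration.
    """
    infected = set(G.nodes)
    t = 0.0
    while infected:
        # Only transmissions towards currently healthy vertices change the state.
        transmissions = [(x, y) for x in infected for y in G.neighbors(x)
                         if y not in infected]
        total_rate = len(infected) + lam * len(transmissions)
        t += rng.expovariate(total_rate)            # waiting time to the next event
        if rng.random() < len(infected) / total_rate:
            infected.remove(rng.choice(tuple(infected)))      # a recovery mark
        else:
            infected.add(rng.choice(transmissions)[1])        # a transmission
    return t

# crude Monte Carlo estimate of E[tau_G] on a small line segment
rng = random.Random(0)
G = nx.path_graph(10)
samples = [contact_process_extinction_time(G, lam=2.0, rng=rng) for _ in range(200)]
print(sum(samples) / len(samples))
```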
Some preliminary results about graphs and the contact process
We will now state a few results concerning graphs and the contact process on line segments and star graphs. These results will be the basic tools in our proofs.
The first two results are not new, but for the sake of completeness we sketch their proof, as they are short and elementary.
Lemma 2.2 (i) If T is a tree of size n in which all vertices have degree bounded by d, then there exists an edge whose removal separates T into two subtrees T_1 and T_2, both of size at least ⌊n/d⌋. (ii) If T is a tree of size n, then T has a vertex x such that the subgraphs attached to x all have size smaller than or equal to |T|/2.
Proof.
To prove (i), suppose the result is not true for some tree T. Consider an edge {x, y} whose removal separates T into two subtrees T_x and T_y, attached respectively to x and y, chosen so that the size of the larger of the two subtrees is minimal among all edges of T. Assume for instance that |T_x| ≥ |T_y|. Our starting hypothesis on T then implies that |T_y| ≤ ⌊n/d⌋ − 1. Moreover, by the definition of the edge {x, y}, all subtrees attached to x must have size bounded by n/2, and thus even by ⌊n/d⌋ − 1, using again our hypothesis on T. But since x has degree at most d, we deduce that n = |T_y| + |T_x| ≤ (⌊n/d⌋ − 1) + 1 + (d − 1)(⌊n/d⌋ − 1) < n, which is a contradiction.
For (ii), choose any vertex in T and call it x_0. If (by chance) all the subgraphs attached to x_0 have size bounded by |T|/2, there is nothing more to do. If not, one of them, call it T_1, has size larger than |T|/2. Call x_1 the only neighbor of x_0 in T_1. If all subgraphs attached to x_1 have size bounded by |T|/2, we are done, and if not, one of them, say T_2, has size larger than |T|/2. Then the only thing to observe is that it cannot be the component containing x_0, as this one has size |T \ T_1|, which by definition of T_1 is smaller than |T|/2. Therefore, if we call x_2 the only neighbor of x_1 in T_2, we have x_2 ≠ x_0. Now we can continue like this, defining a sequence of vertices (x_i), until we find a convenient vertex, and this has to happen, since the (x_i) are all distinct and the graph is finite.

Lemma 2.3 ( [18]) (i) For any graph G and any t > 0, E[τ_G] ≤ t / P[τ_G ≤ t]. (2.1) (ii) For any graph G with n vertices and m edges, E[τ_G] ≤ e^{n+2λm}. (2.2)

Proof. (sketch) The first statement follows from the fact that, for any t > 0, by attractiveness of the contact process, τ_G is stochastically dominated by t · Y, where Y is a random variable with geometric distribution with parameter P[τ_G ≤ t]. The second statement follows from observing that, in each unit time interval, with probability e^{−n−2λm} there is a recovery mark in each vertex of G and no transmission along any of the edges of E.
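Spelling out the sketch of part (i) (a routine computation of ours, not reproduced in the original text): restarting the process at the deterministic times t, 2t, 3t, ... and using attractiveness gives a geometric bound on the number of length-t blocks that the process survives.

```latex
% Write p = P[\tau_G \le t]. By attractiveness, surviving each additional block of
% length t has probability at most 1-p, regardless of the current configuration, so
\[
  P[\tau_G > kt] \;\le\; (1-p)^k , \qquad k = 0,1,2,\dots
\]
% Hence \tau_G is stochastically dominated by t\,Y with Y \sim \mathrm{Geometric}(p), and
\[
  E[\tau_G] \;\le\; t\, E[Y] \;=\; \frac{t}{P[\tau_G \le t]} ,
\]
% which is the bound recorded as (2.1) above.
```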
The next two lemmas are by now part of the folklore. In particular, Lemma 2.4 was already used in [18] (see Proposition 2.1 thereof), but without a full proof, so for the convenience of the reader we provide one in the appendix.
Lemma 2.4 There exists a constant c_line > 0 such that, for any n, the contact process on the line segment {0, . . . , n} satisfies:

Lemma 2.5 There exists a constant c_star > 0 such that, for any n ≥ 2, the contact process on the star graph S_n of size n satisfies:

Let F be either a line segment or a star of size n. We say that F is lit in configuration ξ ∈ {0, 1}^F, or simply that ξ is lit, if with c_0 = min(c_line, c_star)/3. The previous results imply the following:

Corollary 2.6 Let F be either a line segment or a star graph of size n. Then (i) The fully occupied configuration 1 is always lit.
. Then for any x, Proof. Part (i) is a direct consequence of Lemma 2.3 (i), (2.5) and (2.8). For the second part, assume that F is lit in some configuration ξ, and denote by A the set of configurations which are not lit. Note first that where for the last inequality we have used (2.4) and (2.7) for the second term and the definition of a lit configuration for the last term. Now by using Lemma 2.3 (i) and the Markov property, we get The result follows by combining (2.11) and (2.12). For Part (iii), note first that if F is a line segment, then
A coupling result and consequences
The next proposition is the coupling result discussed already in the introduction.
Proposition 2.7 There exists c_coup > 0 such that, for any n ≥ 2 and any tree G with n vertices, for all t ≥ 0 and A ≠ ∅.
This is an immediate consequence of the following lemma.
Lemma 2.8 There exists c 1 < 1 such that, for any tree G with n vertices and any t ≥ n(log n) 3 , Proof. It is sufficient to find c 1 such that (2.13) holds for n large enough, as we can then make c 1 approach 1, if necessary, to take care of the remaining values of n.
If |G| = n, then G necessarily has a subgraph G 0 which is either a star or a line segment and satisfies |G 0 | ≥ max log n, diam(G) .
Letc = c 0 ·c 0 , and Fix an arbitrary nonempty subset A of G. Note that, by (2.3) and (2.10), when n is large enough. Then, by the Markov property, we also have By definition of being lit for a configuration, we then get when n is large enough.
Let now K = ⌊(log n) 2 ⌋ and define the times Define also the events For any x ∈ G and k 1 , . . . , k m with 0 ≤ k 1 < · · · < k m ≤ K − 1, we have Iterating, we get Then by (2.4) and (2.7), we get Now defining we see that (2.15), (2.16) and (2.17) imply that there exists some n 0 ∈ N such that, if n ≥ n 0 , We claim now that, for any A = ∅, Indeed, assume that the event on the left-hand side occurs and ξ A s K = 0. Then, there exists x ∈ A such that ξ x s K = 0. Fix y such that ξ 1 s K (y) = 1, that is, G × {0} ↔ (y, s K ). Since by assumption there exists k * such that E x k * ,Ê y k * and E G 0 k * all occur. We then have, for some x ′ , x ′′ , y ′ , y ′′ ∈ G 0 , Finally, to obtain the expression (2.13), note that for n large enough we have n(log n) 3 > s K , so that, for any t ≥ n(log n) 3 , We now give an important application of Proposition 2.7, which says that whenever we cut a tree into disjoints connected subtrees, a lower bound on the mean extinction time on the original tree is obtained by taking the product of the mean extinction times on the subtrees, up to some correction factor. The latter is negligible as long as the number of pieces in the decomposition of the tree is not too large. Note that a similar result was proved in [7].
It is readily seen that
By (2.1) and Proposition 2.7, Then, for any t > s, ≤ e (2λ+1)|G| , we see that the right-hand side of (2.21) is smaller than 1/2 when |G| is large enough. This proves the result for |G| large enough, with c split = 1/2. We can then reduce the value of c split to take care of the remaining cases.
We will encounter situations in which the above proposition is not useful because the sets G 1 , . . . , G N are too small compared to G, so that the denominator on the right-hand side of (2.18) is too large compared to the numerator. In case we can guarantee that the distances between the G i 's are not too large, the following can then be valuable.
Proposition 2.11
If G is a tree containing N disjoint connected subtrees G 1 , . . . , G N and 0 < s < t, Proof. For each distinct i and j, define G i,j as the connected graph obtained as the union of G i , G j and the shortest path between G i and G j . Note that |G i,j | = σ i,j . For each k ∈ {0, 1, . . .}, define E k exactly as in the proof of Proposition 2.9, and definẽ so the desired inequality follows from bounding as in (2.19) and (2.20).
Level 1: a polynomial lower bound
Proposition 3.1 There exists n 1 ∈ N such that, if n ≥ n 1 and G is a tree with n vertices, then E [τ G ] ≥ n 12 .
Proof. Let C = 4/c 0 . If G contains a star graph or a line segment of size larger than C log n, then (2.5) and (2.8) imply that E[τ G ] ≥ n 12 .
Assume that both the maximum degree and the diameter of G are smaller than C log n. Using Lemma 2.2, we can find two disjoint connected subgraphs H 1 , H ′ 1 so that (assuming n is large enough). Since each H i has both maximum degree and diameter smaller than C log n, we can find subgraphs G i ⊆ H i of size ⌊ √ log n⌋ which are either stars or line segments. By (2.5) and (2.8), we have We now want to apply Proposition 2.11 to G and its subgraphs G 1 , . . . , G N . Letting σ i,j be as in (2.11), we have so, letting s = (log n) 3 and t = 2n 12 and using (3.1), (3.2) and (3.3), the right-hand side of (2.22) is smaller than which is in turn smaller than 1/2 when n is large enough.
Proof of Theorem 1.3
According to Lemma A.1 in [18] and Lemma 2.3, all we have to prove is that there exists a sequence (a n ) such that a n = o(E[τ Gn ]) and for any v ∈ G, But this readily follows from Propositions 2.7 and 3.1.
Level 2: a stretched exponential lower bound with exponent 1/3
Proposition 3.2 There exists n 2 ∈ N such that, if n ≥ n 2 and G is a tree with n vertices, then E[τ G ] > exp{c 0 · n 1/3 }, where c 0 is as in Corollary 2.6.
Proof. Let N = ⌊n 1/3 ⌋. If G contains a subgraph with more than N vertices which is either a star graph or a line segment, then (2.5) and (2.8) give the desired result. Now assume that the maximum degree and diameter of G are both bounded by N ; we can then repeatedly split G using Lemma 2.2 and obtain disjoint connected subgraphs G 1 , . . . , G N , all with at least N vertices. If n is large enough that N is larger than the constant n 1 of Proposition 3.1, we have E[τ G i ] ≥ |G i | 12 ≥ n 4 /2 for each i. Then, by Proposition 2.9, c split 2 2n 1/3 · n 3n 1/3 +3 · (n 4 /2) n 1/3 > e n 1/3 , if n is large enough.
A new proof of Theorem 1.1 (ii)
In this subsection we fix some integer d ≥ 1, and only consider graphs (in fact trees) with maximal degree bounded by d. Set for r ≥ 2 All we have to prove is that α r is bounded away from zero for r large enough. So let r ≥ 2 be given, and consider some graph G with 2 r < |G| ≤ 2 r+1 . By using Lemma 2.2, we can split G in at most d + 1 disjoint connected subgraphs of size at most 2 r , at least if r is large enough. So we can assume that there is a decomposition of G as with N ≤ d + 1 and |G i | ≤ 2 r , for all i. Then by using Lemma 2.9, we deduce that there exists a constant C > 0 such that Since this holds for any G with size bounded by 2 r+1 , we get the important relation: It follows by induction that for any r 0 , for some constant C ′ > 0. Moreover, Proposition 3.2 shows that for r large enough. By combining (3.4) and (3.5), we see that there exists r 0 such that α r ≥ α r 0 /2 for all r ≥ r 0 , proving Theorem 1.1.
Level 3: an exponential bound with a logarithmic correction
Proposition 3.3 There exists n 3 ∈ N such that, if n ≥ n 3 and G is a tree with n vertices, then E[τ G ] ≥ exp{n/(log n) 10 }.
Proof. For any tree G let β(G) = log E[τ G ] |G|/(log |G|) 10 ; then let We will be done once we prove that this sequence is bounded below by a positive constant. We start with the following claim: Claim 3.4 For any A > 0 and any tree G at least one of the following statements holds true: • G has a vertex of degree at least |G|/(log |G|) 10 ; • there exist disjoint connected subtrees G 1 , . . . , G N ⊆ G so that |G i | ≥ 1 4 (log |G|) 10 for each i and N ≥ |G| 4A(log |G|) 13 ; • there exists a decomposition G = G 1 ∪ · · · ∪ G N of G into disjoint connected subtrees with |G i | ≤ |G|/2 for each i and N ≤ |G| A(log |G|) 13 .
Proof. Let G be a tree with degrees bounded above by |G|/(log |G|) 10 . By the second part of Lemma 2.2 there exists a decomposition of G as a disjoint union of connected subgraphs: with |H i | ≤ |G|/2 for all i. Define Note that We also observe that |I| < |G| A(log |G|) 13 (3.8) and moreover, if (3.6) holds, then |J | ≥ |G| 4A(log |G|) 13 . (3.9) The second case in the statement of the lemma corresponds to (3.6); the graphs G 1 , . . . , G N are simply the H i 's for which i ∈ J (and use (3.9)). The third case corresponds to (3.7); we let G 1 , . . . , G N −1 be the H i 's for which i ∈ I and G N = {x} ∪ (∪ i∈I c H i ); then use (3.8).
Claim 3.5 There exists n * ∈ N such that, if G is a tree with |G| ≥ n * , then (a) if G has a vertex of degree larger than |G|/(log |G|) 10 , then β(G) ≥ c star /2; (c) if there exist disjoint and connected G 1 , . . . , Proof. Part (a) follows from (2.8).
To obtain (b), assume that |G| is large enough that 1 4 (log |G|) 10 > n 2 , where n 2 is the constant of Proposition 3.2, so that E [τ G i ] ≥ exp{c 0 ·( 1 4 (log |G|) 10 ) 1/3 } for each i. Then, by Proposition 2.9 we obtain if |G| is large enough. The desired estimate now follows by taking the log and dividing by |G|/(log |G|) 10 .
Finally, for (c), using Proposition 2.9 we obtain: and the desired inequality follows by dividing by |G|/(log |G|) 10 . This completes the proof of Claim 3.5. Now fix r 0 large enough that 2 r 0 > n * and r 0 > 64 (log 2) 2 . (3.10) Then fix A > 0 large enough that From Claims 3.4 and 3.5 and the facts that 1 4A < cstar 2 and log |G| = log 2 · log 2 |G| we obtain the key inequality Recall from (3.11) that β r 0 > 1 4A ; define If r 1 = ∞, then the sequence (β r ) is bounded from below by 1 4A and we are done. Otherwise, we have β r 1 ≥ 1 4A and β r < 1 4A for all r > r 1 , so Using this recursively, for all r > r 1 we have completing the proof.
Proof of Theorem 1.2
Proof of (1.2). The proof will be very similar to that of Proposition 3.3. Fix ε > 0 and, for any tree G, let β ′ (G) = |G|/(log |G|) 1+ε . Then let Claim 3.6 For any A > 0 and any tree G at least one of the following statements is true: • G has a vertex of degree at least |G|/(log |G|) 1+ε ; • for some k ∈ {1, 2, 3}, there exist disjoint connected subtrees G 1 , . . . , G N ⊆ G so that • there exists a decomposition G = G 1 ∪ · · · ∪ G N into disjoint connected subtrees with Proof. Fix A > 0. Assume that G is a tree with degrees bounded above by n/(log n) 1+ε . We again take a vertex x so that all the subtrees connected to x, denoted H 1 , . . . , H deg(x) , have no more than |G|/2 vertices each. Now define the sets of indices Note that at least one of the following holds: We also observe that (ii), (iii) and (iv) respectively imply Finally, note that the distance between H i and H j for i = j is equal to 2, since both H i and H j are connected to x. (c) if there exist disjoint and connected G 1 , . . . , Proof. The proofs of statements (a) and (c) are the same as those of (a) and (c) of Claim 3.5, respectively. Let us prove (b) using Proposition 2.11. In the notation of that proposition, we simply bound σ i,j ≤ |G| and let s = |G| 4 and t = 2 exp{N (log |G|) k }. Note that, if n is large enough, for each i we have by Proposition 3.3. Then, If n is large enough, this is smaller than 1/2, uniformly on N ∈ {1, . . . , n}. This shows that Choose r 0 large enough that 2 r 0 > n ⋆ and r 0 ≥ 192 (log 2) 2 , and choose A large enough that 1 12A < min(c star /2, β ′ r 0 ). Putting together Claims 3.6 and 3.7, we obtain the inequality From here, we conclude the proof exactly as in Proposition 3.3.
Proof of (1.3). For every ε > 0 and every graph G with at least two vertices, let Claim 3.8 For every ε > 0 there exists C ε > 0 such that, for any graph G, Proof. This follows from applying (1.2) with ε replaced by ε/2 and (2.1). Claim 3.9 For all ε > 0 there exists N ε ∈ N such that, if G is a tree and G 0 ⊆ G is a connected subtree with |G 0 | = N ε , then the contact process on G satisfies Proof. Let G be a tree with a connected subtree G 0 . Choose a sequence of connected subtrees The desired result now follows from observing that We are now ready to conclude. Let G be a tree with |G| ≥ N ε . Also let A ⊆ G, A = ∅, and x ∈ A. Fix a connected subtree G 0 ∋ x with |G 0 | = N ε . Then, where we define Noting that the set of pairs (G ′ , z) over which the infimum is taken is finite, and the probability is positive for each pair, we obtain θ(n) > 0 for each n. So (1.3) is now proved for n large enough. We can now choose c ε small to cover the remaining values of n. Here we will recall some facts about the one-dimensional contact process in order to prove the two first statements of the lemma. The third one (2.5) is proved in [15], see (3.11) in Part I of that book.
We observe that it is sufficient to prove that these statements hold for n large enough, as we can then lower the value of c line , if necessary, to take care of the remaining values of n.
We will need to simultaneously consider the contact process on the integer line Z (which we denote by (ζ t )) and on the line segment {0, . . . , n} (denoted by (ξ t )). Our previous conventions about superscripts still apply; for example, (ζ x t ) and (ζ 1 t ) are the processes on Z started respectively from only x infected and full occupancy.
Proof of Lemma 2.5
The result is a straightforward adaption of Lemma 3.1 in [19]. That lemma implies that there exists c > 0 such that the following holds (o denotes the central vertex of the star and ℓ denotes Lebesgue measure on [0, ∞)): (The mentioned lemma is stated with the assumption that λ > 1, but the proof works equally well here). This already implies (2.8).
Moreover, by a straightforward computation, it can be shown that Together with a union bound this implies that, with probability larger than 1 − 2ne −c ′′′ n , the following event occurs: This proves (2.7), as one can observe that the intersection of the above event with the event on the right-hand side of (4.9) is empty.
Ensemble Deep Learning Models for Forecasting Cryptocurrency Time-Series
Nowadays, cryptocurrency has infiltrated almost all financial transactions; thus, it is generally recognized as an alternative method for paying and exchanging currency. Cryptocurrency trade constitutes a constantly increasing financial market and a promising type of profitable investment; however, it is characterized by high volatility and strong fluctuations of prices over time. Therefore, the development of an intelligent forecasting model is considered essential for portfolio optimization and decision making. The main contribution of this research is the combination of three of the most widely employed ensemble learning strategies: ensemble-averaging, bagging and stacking with advanced deep learning models for forecasting major cryptocurrency hourly prices. The proposed ensemble models were evaluated utilizing state-of-the-art deep learning models as component learners, which were comprised by combinations of long short-term memory (LSTM), Bi-directional LSTM and convolutional layers. The ensemble models were evaluated on prediction of the cryptocurrency price on the following hour (regression) and also on the prediction if the price on the following hour will increase or decrease with respect to the current price (classification). Additionally, the reliability of each forecasting model and the efficiency of its predictions is evaluated by examining for autocorrelation of the errors. Our detailed experimental analysis indicates that ensemble learning and deep learning can be efficiently beneficial to each other, for developing strong, stable, and reliable forecasting models.
Introduction
The global financial crisis of 2007-2009 was the most severe crisis over the last few decades with, according to the National Bureau of Economic Research, a peak to trough contraction of 18 months. The consequences were severe in most aspects of life including economy (investment, productivity, jobs, and real income), social (inequality, poverty, and social tensions), leading in the long run to political instability and the need for further economic reforms. In an attempt to "think outside the box" and bypass the governments and financial institutions manipulation and control, Satoshi Nakamoto [1] proposed Bitcoin which is an electronic cash allowing online payments, where the double-spending problem was elegantly solved using a novel purely peer-to-peer decentralized blockchain along with a cryptographic hash function as a proof-of-work.
Nowadays, there are over 5000 cryptocurrencies available; however, when it comes to scientific research there are several issues to deal with. The large majority of these are relatively new, indicating that there is an insufficient amount of data for quantitative modeling or price forecasting. In the same manner, they are not highly ranked when it comes to market capitalization to be considered as market drivers. A third aspect which has not attracted attention in the literature is the separation of cryptocurrencies between mineable and non-mineable. Minable cryptocurrencies have several advantages i.e., the performance of different mineable coins can be monitored within the same blockchain which cannot be easily said for non-mineable coins, and they are community driven open source where different developers can contribute, ensuring the fact that a consensus has to be reached before any major update is done, in order to avoid splitting. Finally, when it comes to the top cryptocurrencies, it appears that mineable cryptocurrencies like Bitcoin (BTC) and Ethereum (ETH), recovered better the 2018 crash rather than Ripple (XRP) which is the highest ranked pre-mined coin. In addition, the non-mineable coins transactions are powered via a centralized blockchain, endangering price manipulation through inside trading, since the creators keep a given percentage to themselves, or through the use of pump and pull market mechanisms. Looking at the number one cryptocurrency exchange in the world, Coinmarketcap, by January 2020 at the time of writing there are only 31 mineable cryptocurrencies out of the first 100, ranked by market capitalization. The classical investing strategy in cryptocurrency market is the "buy, hold and sell" strategy, in which cryptocurrencies are bought with real money and held until reaching a higher price worth selling in order for an investor to make a profit. Obviously, a potential fractional change in the price of a cryptocurrency may gain opportunities for huge benefits or significant investment losses. Thus, the accurate prediction of cryptocurrency prices can potentially assist financial investors for making their proper investment policies by decreasing their risks. However, the accurate prediction of cryptocurrency prices is generally considered a significantly complex and challenging task, mainly due to its chaotic nature. This problem is traditionally addressed by the investor's personal experience and consistent watching of exchanges prices. Recently, the utilization of intelligent decision systems based on complicated mathematical formulas and methods have been adopted for potentially assisting investors and portfolio optimization.
Let y_1, y_2, . . . , y_n be the observations of a time series. Generally, a nonlinear regression model of order m is defined by y_t = f(y_{t−1}, y_{t−2}, . . . , y_{t−m}, θ), where f(·) relates y_t to its m previous values and θ is the parameter vector. After the model structure has been defined, the function f(·) can be determined by traditional time-series methods such as ARIMA (Auto-Regressive Integrated Moving Average) and GARCH-type models and their variations [2][3][4] or by machine learning methods such as Artificial Neural Networks (ANNs) [5,6]. However, both mentioned approaches fail to capture the stochastic and chaotic nature of cryptocurrency time-series and therefore to deliver accurate forecasts [7]. To this end, more sophisticated algorithmic approaches have to be applied, such as deep learning and ensemble learning methods. From the perspective of developing strong forecasting models, deep learning and ensemble learning constitute two fundamental learning strategies. The former is based on neural network architectures and is able to achieve state-of-the-art accuracy by creating and exploiting new, more valuable features and by filtering out the noise of the input data; the latter attempts to generate strong prediction models by exploiting multiple learners in order to reduce the bias or variance of error.
During the last few years, researchers have paid special attention to the development of time-series forecasting models which exploit the advantages and benefits of deep learning techniques such as convolutional and long short-term memory (LSTM) layers. More specifically, Wen and Yuan [8] and Liu et al. [9] proposed Convolutional Neural Network (CNN) and LSTM prediction models for stock market forecasting. Along this line, Livieris et al. [10] and Pintelas et al. [11] proposed CNN-LSTM models with various architectures for efficiently forecasting gold and cryptocurrency time-series prices and movement, reporting some interesting results. Nevertheless, although deep learning models are tailored to cope with temporal correlations and efficiently extract valuable information from the training set, they failed to generate reliable forecasting models [7,11]. In contrast, although ensemble learning models are an elegant solution for developing stable models and addressing the high variance of individual forecasting models, their performance heavily depends on the diversity and accuracy of the component learners. Therefore, a time-series prediction model which exploits the benefits of both methodologies may significantly improve the prediction performance.
The main contribution of this research is the combination of ensemble learning strategies with advanced deep learning models for forecasting cryptocurrency hourly prices and movement. The proposed ensemble models utilize state-of-the-art deep learning models as component learners, which are based on combinations of Long Short-Term Memory (LSTM), Bi-directional LSTM (BiLSTM) and convolutional layers. An extensive experimental analysis is performed considering both classification and regression problems, to evaluate the performance of averaging, bagging and stacking ensemble strategies. More analytically, all ensemble models are evaluated on prediction of the cryptocurrency price on the next hour (regression) and also on the prediction of whether the price on the following hour will increase or decrease with respect to the current price (classification). It is worth mentioning that predicting the movement of a cryptocurrency is probably more significant than predicting its price for investors and financial institutions. Additionally, the efficiency of the predictions of each forecasting model is evaluated by examining for autocorrelation of the errors, which constitutes a significant reliability test of each model.
The remainder of the paper is organized as follows: Section 2 presents a brief review of state of the art deep learning methodologies for cryptocurrency forecasting. Section 3 presents the advanced deep learning models, while Section 4 presents the ensemble strategies utilized in our research. Section 5 presents our experimental methodology including the data preparation and preprocessing as well as the detailed experimental analysis, regarding the evaluation of ensemble of deep learning models. Section 6 discusses the obtained results and summarizes our findings. Finally, Section 7 presents our conclusions and presents some future directions.
Deep Learning in Cryptocurrency Forecasting: State-of-the-Art
Yiying and Yeze [12] focused on the non-stationary price dynamics of three cryptocurrencies: Bitcoin, Ethereum, and Ripple. Their approach aimed at identifying and understanding the factors which influence the value formation of these digital currencies. Their collected data contained 1030 trading days regarding opening, high, low, and closing prices. They conducted an experimental analysis which revealed the efficiency of LSTM models over classical ANNs, indicating that LSTM models are more capable of exploiting information hidden in historical data. Additionally, the authors stated that probably the reason for the efficiency of LSTM networks is that they tend to depend more on short-term dynamics while ANNs tend to depend more on long-term history. Nevertheless, in case enough historical information is given, ANNs can achieve similar accuracy to LSTM networks.
Nakano et al. [13] examined the performance of ANNs for the prediction of Bitcoin intraday technical trading. The authors focused on identifying the key factors which affect the prediction performance for extracting useful trading signals of Bitcoin from its technical indicators. For this purpose, they conducted a series of experiments utilizing various ANN models with shallow and deep architectures and dataset structures. The data utilized in their research regarded Bitcoin time-series return data at 15-min time intervals. Their experiments illustrated that the utilization of multiple technical indicators could possibly prevent the prediction model from overfitting on non-stationary financial data, which enhances trading performance. Moreover, they stated that their proposed methodology attained considerably better performance than the primitive technical trading and buy-and-hold strategies, under realistic assumptions of execution costs.
Mcnally et al. [14] utilized two deep learning models, namely a Bayesian-optimised Recurrent Neural Network and an LSTM network, for Bitcoin price prediction. The utilized data ranged from August 2013 to July 2016, regarding open, high, low and close Bitcoin prices as well as the block difficulty and hash rate. Their performance evaluation showed that the LSTM network demonstrated the best prediction accuracy, outperforming the other recurrent model as well as the classical statistical method ARIMA. Shintate and Pichl [15] proposed a new trend prediction classification framework which is based on deep learning techniques. Their proposed framework utilized a metric learning-based method, called the Random Sampling method, which measures the similarity between the training samples and the input patterns. They used high frequency data (1-min) ranging from June 2013 to March 2017, containing historical data from the OkCoin Bitcoin market (Chinese Yuan Renminbi and US Dollars). The authors concluded that the profit rates based on the utilized sampling method considerably outperformed those based on LSTM networks, confirming the superiority of the proposed framework. In contrast, these profit rates were lower than those obtained by the classical buy-and-hold strategy; thus they stated that it does not provide a basis for trading.
Miura et al. [16] attempted to analyze the high-frequency Bitcoin (1-min) time series utilizing machine learning and statistical forecasting models. Due to the large size of the data, they decided to aggregate the realized volatility values utilizing 3-h long intervals. Additionally, they pointed out that these values presented a weak correlation based on high-low price extent with the relative values of the 3-h interval. In their experimental analysis, they focused on evaluating various ANNs-type models, SVMs and Ridge Regression and the Heterogeneous Auto-Regressive Realized Volatility model. Their results demonstrated that Ridge Regression considerably presented the best performance while SVM exhibited poor performance.
Ji et al. [17] evaluated the prediction performance on Bitcoin price of various deep learning models such as LSTM networks, convolutional neural networks, deep neural networks, deep residual networks and their combinations. The data used in their research contained 29 features of the Bitcoin blockchain from 2590 days (from 29 November 2011 to 31 December 2018). They conducted a detailed experimental procedure considering both classification and regression problems, where the former predicts whether or not the next day's price will increase or decrease and the latter predicts the next day's Bitcoin price. The numerical experiments illustrated that the deep neural network (DNN)-based models performed best for price ups-and-downs, while the LSTM models slightly outperformed the rest of the models for forecasting Bitcoin's price.
Kumar and Rath [18] focused on forecasting the trends of Etherium prices utilizing machine learning and deep learning methodologies. They conducted an experimental analysis and compared the prediction ability of LSTM neural networks and Multi-Layer perceptron (MLP). They utilized daily, hourly, and minute based data which were collected from the CoinMarket and CoinDesk repositories. Their evaluation results illustrated that LSTM marginally outperformed MLP but not considerably, although their training time was significantly high.
Pintelas et al. [7,11] conducted a detailed research, evaluating advanced deep learning models for predicting major cryptocurrency prices and movements. Additionally, they conducted a detailed discussion regarding the fundamental research questions: Can deep learning algorithms efficiently predict cryptocurrency prices? Are cryptocurrency prices a random walk process? Which is a proper validation method of cryptocurrency price prediction models? Their comprehensive experimental results revealed that even the LSTM-based and CNN-based models, which are generally preferable for time-series forecasting [8][9][10], were unable to generate efficient and reliable forecasting models. Moreover, the authors stated that cryptocurrency prices probably follow an almost random walk process while few hidden patterns may probably exist. Therefore, new sophisticated algorithmic approaches should be considered and explored for the development of a prediction model to make accurate and reliable forecasts.
In this work, we advocate combining the advantages of ensemble learning and deep learning for forecasting cryptocurrency prices and movement. Our research contribution aims at exploiting the ability of deep learning models to learn the internal representation of the cryptocurrency data and the effectiveness of ensemble learning for generating powerful forecasting models by exploiting multiple learners for reducing the bias or variance of error. Furthermore, similar to our previous research [7,11], we provide a detailed performance evaluation for both regression and classification problems. To the best of our knowledge, this is the first research devoted to the adoption and combination of ensemble learning and deep learning for forecasting cryptocurrency prices and movement.
Long Short-Term Memory Neural Networks
Long Short-Term Memory (LSTM) [19] constitutes a special case of recurrent neural networks which were originally proposed to model both short-term and long-term dependencies [20][21][22]. The major novelty in an LSTM network is the memory block in the recurrent hidden layer, which contains memory cells with self-connections memorizing the temporal state and adaptive gate units for controlling the information flow in the block. With the treatment of the hidden layer as a memory unit, LSTM networks can cope with correlations within time-series in both the short and the long term [23].
More analytically, the structure of the memory cell c t is composed by three gates: the input gate, the forget gate and the output gate. At every time step t, the input gate i t determines which information is added to the cell state S t (memory), the forget gate f t determines which information is thrown away from the cell state through the decision by a transformation function in the forget gate layer; while the output gate o t determines which information from the cell state will be used as output.
With the utilization of gates in each cell, data can be filtered, discarded or added. In this way, LSTM networks are capable of identifying both short and long term correlation features within time series. Additionally, it is worth mentioning that a significant advantage of the utilization of memory cells and adaptive gates which control information flow is that the vanishing gradient problem can be considerably addressed, which is crucial for the generalization performance of the network [20].
The simplest way to increase the depth and the capacity of LSTM networks is to stack LSTM layers together, in which the output of the (L − 1)th LSTM layer at time t is treated as input of the Lth layer. Notice that these input-output connections are the only connections between the LSTM layers of the network. Based on the above formulation, the structure of the stacked LSTM can be described as follows: Let h^L_t and h^{L−1}_t denote the outputs of the Lth and (L − 1)th layer, respectively. Each layer L produces a hidden state h^L_t based on the current output h^{L−1}_t of the previous layer and its own previous output h^L_{t−1}. More specifically, the forget gate f^L_t of the Lth layer determines which part of the cell state c^L_{t−1} is retained, by f^L_t = σ(W^L_f · [h^{L−1}_t, h^L_{t−1}] + b_f), where σ(·) is the sigmoid function while W^L_f and b_f are the weight matrix and bias vector of layer L regarding the forget gate, respectively. Subsequently, the input gate i^L_t of the Lth layer computes the values to be added to the memory cell c^L_t by i^L_t = σ(W^L_i · [h^{L−1}_t, h^L_{t−1}] + b_i), where W^L_i is the weight matrix of layer L regarding the input gate. Then, the output gate o^L_t of the Lth layer filters the information and calculates the output value by o^L_t = σ(W^L_o · [h^{L−1}_t, h^L_{t−1}] + b_o), where W^L_o and b_o are the weight matrix and bias vector of the output gate in the Lth layer, respectively. Finally, the output of the memory cell is computed by h^L_t = o^L_t · tanh(c^L_t), with c^L_t = f^L_t · c^L_{t−1} + i^L_t · tanh(W^L_c · [h^{L−1}_t, h^L_{t−1}] + b_c), where · denotes the pointwise vector multiplication and tanh the hyperbolic tangent function.
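As a concrete, purely illustrative sketch of such stacking in the Keras library (which is used later in this work), note that every LSTM layer except the last must return its full hidden-state sequence so that it can feed the next LSTM layer; the layer sizes and input shape below are placeholders, not the configuration of Table 1.

```python
from keras.models import Sequential
from keras.layers import LSTM, Dense

# A two-layer stacked LSTM: the first layer returns its full hidden-state
# sequence so that it can feed the second LSTM layer; the top layer returns
# only its last hidden state, which drives the regression output.
model = Sequential([
    LSTM(64, return_sequences=True, input_shape=(9, 1)),   # (time steps, features)
    LSTM(32),
    Dense(1),                                               # next-hour price
])
model.compile(optimizer='adam', loss='mse')
```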
Bi-Directional Recurrent Neural Networks
Similar to LSTM networks, one of the most efficient and widely utilized RNN architectures is the Bi-directional Recurrent Neural Network (BRNN) [24]. In contrast with the LSTM, these networks are composed of two hidden layers, connected to input and output. The principal idea of BRNNs is that each training sequence is presented forwards and backwards to two separate recurrent networks [20]. More specifically, the first hidden layer possesses recurrent connections from the past time steps, while in the second one, the recurrent connections are reversed, transferring activation backwards along the sequence. Given the input and target sequences, the BRNN can be unfolded across time in order to be efficiently trained utilizing a classical backpropagation algorithm.
In fact, BRNN and LSTM are based on compatible techniques in which the former proposes the wiring of two hidden layers, which compose the network, while the latter proposes a new fundamental unit for composing the hidden layer.
Along this line, Bi-directional LSTM (BiLSTM) networks [25] were proposed in the literature, which incorporate two LSTM networks in the BRNN framework. More specifically, BiLSTM incorporates a forward LSTM layer and a backward LSTM layer in order to learn information from preceding and following tokens. In this way, both past and future contexts for a given time t are accessed, hence better prediction can be achieved by taking advantage of more sentence-level information.
In a bi-directional stacked LSTM network, the output of each feed-forward LSTM layer is computed as in the classical stacked LSTM layer and these layers are iterated from t = 1 to T. In contrast, each backward LSTM layer is iterated in reverse, i.e., from t = T to 1. Hence, at time t, the output ←h^L_t of the backward LSTM layer L is calculated with the same gate equations as above, with the recursion running backwards in time. Finally, the output of this BiLSTM architecture is obtained by combining the forward and backward hidden states at each time step, typically by concatenating them.
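In Keras, this forward/backward wiring is commonly obtained by wrapping an LSTM layer with the Bidirectional wrapper, which by default concatenates the two directions; the sizes below are again illustrative placeholders rather than the configuration used in the experiments.

```python
from keras.models import Sequential
from keras.layers import Bidirectional, LSTM, Dense

# Bidirectional(...) runs one LSTM forwards and one backwards over the input
# sequence and, by default, concatenates their hidden states at every step.
model = Sequential([
    Bidirectional(LSTM(64, return_sequences=True), input_shape=(9, 1)),
    Bidirectional(LSTM(32)),
    Dense(1),
])
model.compile(optimizer='adam', loss='mse')
```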
Convolutional Neural Networks
Convolutional Neural Network (CNN) models [26,27] were originally proposed for image recognition problems, achieving human level performance in many cases. CNNs have great potential to identify the complex patterns hidden in time series data. The advantage of the utilization of CNNs for time series is that they can efficiently extract knowledge and learn an internal representation from the raw time series data directly and they do not require special knowledge from the application domain to filter input features [10].
A typical CNN consists of two main components: In the first component, mathematical operations, called convolution and pooling, are utilized to develop features of the input data while in the second component, the generated features are used as input to a usually fully-connected neural network.
The convolutional layer constitutes the core of a CNN, which systematically applies trained filters to input data for generating feature maps. Convolution can be considered as applying and sliding a one-dimensional (time) filter over the time series [28]. Moreover, since the output of a convolution is a new filtered time series, the application of several convolutions implies the generation of a multivariate time series whose dimension equals the number of utilized filters in the layer. The rationale behind this strategy is that the application of several convolutions leads to the generation of multiple discriminative features, which usually improve the model's performance. In practice, this kind of layer is proven to be very efficient, and stacking different convolutional layers allows deeper layers to learn high-order or more abstract features and layers close to the input to learn low-level features.
Pooling layers were proposed to address the limitation that feature maps generated by the convolutional layers record the precise position of features in the input. These layers aggregate over a sliding window over the feature maps, reducing their length, in order to attain some translation invariance of the trained features. More analytically, the feature maps obtained from the previous convolutional layer are pooled over a temporal neighborhood, separately, by a sum pooling function or by a max pooling function in order to develop a new set of pooled feature maps. Notice that the output pooled feature maps constitute a filtered version of the feature maps which are given as inputs to the pooling layer [28]. This implies that small translations of the inputs of the CNN, which are usually detected by the convolutional layers, will become approximately invariant.
Finally, in addition to convolutional and pooling layers, some CNN architectures include batch normalization layers [29] and dropout layers [30] in order to accelerate the training process and reduce overfitting, respectively.
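Combining the convolutional front-end of this section with the recurrent layers of the previous sections yields base learners of the CNN-LSTM/CNN-BiLSTM type employed below. The following Keras sketch only indicates the overall structure; the filter count, kernel size and layer sizes are placeholders rather than the hyper-parameters of Table 1.

```python
from keras.models import Sequential
from keras.layers import (Conv1D, MaxPooling1D, BatchNormalization,
                          Dropout, LSTM, Dense)

# Indicative CNN-LSTM base learner: Conv1D filters extract local temporal
# features, pooling shortens the feature maps, and the LSTM models the
# remaining sequence before the dense regression output.
model = Sequential([
    Conv1D(32, kernel_size=2, activation='relu', input_shape=(9, 1)),
    MaxPooling1D(pool_size=2),
    BatchNormalization(),
    LSTM(64),
    Dropout(0.2),
    Dense(1),
])
model.compile(optimizer='adam', loss='mse')
```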
Ensemble Deep Learning Models
Ensemble learning has been proposed as an elegant solution to address the high variance of individual forecasting models and reduce the generalization error [31][32][33]. The basic principle behind any ensemble strategy is to weigh a number of models and combine their individual predictions for improving the forecasting performance; while the key point for the effectiveness of the ensemble is that its components should be characterized by accuracy and diversity in their predictions [34]. In general, the combination of multiple models predictions adds a bias which in turn counters the variance of a single trained model. Therefore, by reducing the variance in the predictions, the ensemble can perform better than any single best model.
In the literature, several strategies were proposed to design and develop ensemble of regression models. Next, we present three of the most efficient and widely employed strategies: ensemble-averaging, bagging, and stacking.
Ensemble-Averaging of Deep Learning Models
Ensemble-averaging [35] (or averaging) is the simplest combination strategy for exploiting the prediction of different regression models. It constitutes a commonly and widely utilized ensemble strategy of individual trained models in which their predictions are treated equally. More specifically, each forecasting model is individually trained and the ensemble-averaging strategy linearly combines all predictions by averaging them to develop the output. Figure 1 illustrates a high-level schematic representation of the ensemble-averaging of deep learning models. Ensemble-averaging is based on the philosophy that its component models will not usually make the same error on new unseen data [36]. In this way, the ensemble model reduces the variance in the prediction, which results in better predictions compared to a single model. The advantages of this strategy are its simplicity of implementation and the exploitation of the diversity of errors of its component models without requiring any additional training on large quantities of the individual predictions.
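A minimal sketch of the averaging step, assuming a list of already trained component models with a Keras-style predict method and a common input matrix (all names here are illustrative):

```python
import numpy as np

def ensemble_average_predict(models, X):
    """Combine the component predictions with equal weights (simple averaging)."""
    predictions = np.stack([m.predict(X).ravel() for m in models])
    return predictions.mean(axis=0)
```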
Bagging Ensemble of Deep Learning Models
Bagging [33] is one of the most widely used and successful ensemble strategies for improving the forecasting performance of unstable models. Its basic principle is the development of more diverse forecasting models by modifying the distribution of the training set based on a stochastic strategy. More specifically, it applies the same learning algorithm on different bootstrap samples of the original training set and the final output is produced via simple averaging. An attractive property of the bagging strategy is that it reduces variance while simultaneously retaining the bias, which assists in avoiding overfitting [37,38]. Figure 2 demonstrates a high-level schematic representation of the bagging ensemble of n deep learning models. It is worth mentioning that the bagging strategy is significantly useful for dealing with large and high-dimensional datasets, where finding a single model which can exhibit good performance in one step is impossible due to the complexity and scale of the prediction problem.
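A minimal sketch of the bagging strategy, assuming a hypothetical build_model() factory that returns a freshly compiled component network; the factory, learner count, epoch count and batch size are placeholders:

```python
import numpy as np

def train_bagging_ensemble(build_model, X_train, y_train, n_learners=7,
                           epochs=50, batch_size=512, seed=0):
    """Train each component learner on its own bootstrap sample of the training set."""
    rng = np.random.default_rng(seed)
    models = []
    for _ in range(n_learners):
        idx = rng.integers(0, len(X_train), size=len(X_train))   # sample with replacement
        model = build_model()                                    # fresh, untrained copy
        model.fit(X_train[idx], y_train[idx], epochs=epochs,
                  batch_size=batch_size, verbose=0)
        models.append(model)
    return models   # their outputs are then averaged exactly as in ensemble-averaging
```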
Stacking Ensemble of Deep Learning Models
Stacked generalization or stacking [39] constitutes a more elegant and sophisticated approach for combining the prediction of different learning models. The motivation of this approach is based on the limitation of simple ensemble-average which is that each model is equally considered to the ensemble prediction, regardless of how well it performed. Instead, stacking induces a higher-level model for exploiting and combining the prediction of the ensemble's component models. More specifically, the models which comprise the ensemble (Level-0 models) are individually trained using the same training set (Level-0 training set). Subsequently, a Level-1 training set is generated by the collected outputs of the component classifiers.
This dataset is utilized to train a single Level-1 model (meta-model) which ultimately determines how the outputs of the Level-0 models should be efficiently combined, to maximize the forecasting performance of the ensemble. Figure 3 illustrates the stacking of deep learning models.
In general, stacked generalization works by deducing the biases of the individual learners with respect to the training set [33]. This deduction is performed by the meta-model. In other words, the meta-model is a special case of weighted-averaging, which utilizes the set of predictions as a context and conditionally decides to weight the input predictions, potentially resulting in better forecasting performance.
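A minimal sketch of the stacking strategy: the Level-1 training set is formed from the Level-0 (component) predictions, here computed on a held-out portion of the training data, and a meta-learner is fitted on it. LinearRegression is used purely for illustration and could be replaced by any of the meta-learners considered later; all names are ours.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def fit_stacking_meta_model(level0_models, X_level1, y_level1):
    """Build the Level-1 training set from the Level-0 predictions and fit a meta-learner."""
    features = np.column_stack([m.predict(X_level1).ravel() for m in level0_models])
    return LinearRegression().fit(features, y_level1)

def stacking_predict(level0_models, meta_model, X):
    features = np.column_stack([m.predict(X).ravel() for m in level0_models])
    return meta_model.predict(features)
```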
Numerical Experiments
In this section, we evaluate the performance of the three presented ensemble strategies, which utilize advanced deep learning models as component learners. The implementation code was written in Python 3.4, while for all deep learning models the Keras library [40] was utilized with Theano as back-end.
For the purpose of this research, we utilized data from 1 January 2018 to 31 August 2019 from the hourly prices of the cryptocurrencies BTC, ETH and XRP. For evaluation purposes, the data were divided into a training set and a testing set as in [7,11]. More specifically, the training set comprised data from 1 January 2018 to 28 February 2019 (10,177 datapoints), covering a wide range of long- and short-term trends, while the testing set consisted of data from 1 March 2019 to 31 August 2019 (4415 datapoints), which ensured a substantial amount of unseen out-of-sample prices for testing.
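Each hourly price series is converted into supervised samples with a sliding window of m previous prices whose target is the next hourly price; a minimal sketch of this preprocessing step (the window length and array layout are illustrative assumptions on our part):

```python
import numpy as np

def make_supervised_windows(prices, m):
    """Turn a 1-D price series into (samples, m, 1) inputs and next-hour targets."""
    X, y = [], []
    for t in range(m, len(prices)):
        X.append(prices[t - m:t])   # the m previous hourly prices
        y.append(prices[t])         # the price on the following hour
    X = np.asarray(X, dtype='float32')[..., np.newaxis]
    return X, np.asarray(y, dtype='float32')
```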
Next, we concentrated on the experimental analysis to evaluate the presented ensemble strategies using the advanced deep learning models CNN-LSTM and CNN-BiLSTM as base learners. A detailed description of both component models is presented in Table 1. These models and their hyper-parameters were selected in previous research [7] after extensive experimentation, in which they exhibited the best performance on the utilized datasets. Both component models were trained for 50 epochs with the Adaptive Moment Estimation (ADAM) algorithm [41] with a batch size equal to 512, using a mean-squared loss function. The ADAM algorithm ensures that the learning steps, during the training process, are scale invariant relative to the parameter gradients.

Table 1. Parameter specification of the two base learners.

The performance of all ensemble models was evaluated utilizing the Root Mean Square Error (RMSE) performance metric. Additionally, the classification accuracy of all ensemble deep models was measured, relative to the problem of predicting whether the cryptocurrency price will increase or decrease on the following hour. More analytically, by analyzing a number of previous hourly prices, the model predicts the price on the following hour and also predicts if the price will increase or decrease, with respect to the current cryptocurrency price. For this binary classification problem, three performance metrics were used: Accuracy (Acc), Area Under Curve (AUC) and F_1-score (F_1).
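Given the price predictions of a model on the test set, both the regression metric and the movement-classification metrics can be computed from the same outputs by also deriving the predicted direction of movement; the following scikit-learn sketch is illustrative, and the way the movement labels are derived from the current price is an assumption on our part:

```python
import numpy as np
from sklearn.metrics import accuracy_score, f1_score, roc_auc_score

def evaluate_forecasts(y_true, y_pred, current_prices):
    """RMSE on the predicted prices and Acc/AUC/F1 on the implied price movement."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    current_prices = np.asarray(current_prices)
    true_up = (y_true > current_prices).astype(int)   # actual next-hour movement
    pred_up = (y_pred > current_prices).astype(int)   # predicted next-hour movement
    return {
        'RMSE': float(np.sqrt(np.mean((y_true - y_pred) ** 2))),
        'Acc': accuracy_score(true_up, pred_up),
        'AUC': roc_auc_score(true_up, pred_up),
        'F1': f1_score(true_up, pred_up),
    }
```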
All ensemble models were evaluated using 7 and 11 component learners, which reported the best overall performance. Notice that any attempt to increase the number of classifiers resulted in no improvement in the performance of each model. Moreover, stacking was evaluated using the most widely used state-of-the-art algorithms [42] as meta-learners: Support Vector Regression (SVR) [43], Linear Regression (LR) [44], k-Nearest Neighbor (kNN) [45] and Decision Tree Regression (DTR) [46]. For fairness and for performing an objective comparison, the hyper-parameters of all meta-learners were selected in order to maximize their experimental performance and are briefly presented in Table 2. Summarizing, we evaluate the performance of the following ensemble models:
• "Averaging 7" and "Averaging 11" stand for the ensemble-averaging model utilizing 7 and 11 component learners, respectively.
• "Bagging 7" and "Bagging 11" stand for the bagging ensemble model utilizing 7 and 11 component learners, respectively.
• "Stacking (·) 7" and "Stacking (·) 11" stand for the stacking ensemble model utilizing 7 and 11 component learners, respectively, with the utilized meta-learner reported in parentheses.
Regarding the ensemble models with CNN-LSTM as base learner, Stacking (LR) 7 and Stacking (LR) 11 reported the same performance, which implies that the increment of component learners from 7 to 11 did not affect the regression performance of this ensemble algorithm. In contrast, Stacking (LR) 7 exhibited better classification performance than Stacking (LR) 11, reporting higher accuracy, AUC and F_1-score. Additionally, the stacking ensemble reported the worst performance among all ensemble models when utilizing DTR and SVR as meta-learners, also reporting worse performance than the single CNN-LSTM model; while the best classification performance was reported using kNN as meta-learner in almost all cases.
The averaging and bagging ensembles reported slightly better regression performance than the single CNN-LSTM model. In addition, both ensembles presented the best classification performance, considerably outperforming all other forecasting models on all datasets. Moreover, the bagging ensemble reported the highest accuracy, AUC and F₁-score in most cases, slightly outperforming the averaging ensemble. Finally, it is worth noticing that neither bagging nor averaging improved their performance when the number of component classifiers increased for m = 4, while for m = 9 a slight improvement in their performance was noticed.

With CNN-BiLSTM as base learner, stacking(LR)₇ presented slightly higher accuracy, AUC and F₁-score than stacking(LR)₁₁ for the ETH and XRP datasets, while for the BTC dataset stacking(LR)₁₁ reported slightly better classification performance. This implies that the increment of component learners from 7 to 11 did not considerably improve or affect the regression and classification performance of the stacking ensemble algorithm. The stacking ensemble reported the worst (highest) RMSE scores utilizing DTR, SVR and kNN as meta-learners; it exhibited the worst performance among all ensemble models, worse even than that of the single CNN-BiLSTM model. However, the stacking ensemble reported the highest classification performance using kNN as meta-learner. Additionally, it presented slightly better classification performance using DTR or SVR rather than LR as meta-learner for the ETH and XRP datasets, while for the BTC dataset it presented better performance using LR as meta-learner. Regarding the other two ensemble strategies, averaging and bagging exhibited slightly better regression performance than the single CNN-BiLSTM model. Nevertheless, both reported the highest accuracy, AUC and F₁-score, which implies that they presented the best classification performance among all models, with bagging exhibiting slightly better classification performance. Furthermore, it is worth mentioning that both ensembles slightly improved their performance in terms of RMSE score and accuracy when the number of component classifiers increased from 7 to 11.
In the follow-up, we provide a deeper insight into the classification performance of the forecasting models by presenting the confusion matrices of averaging₁₁, bagging₁₁ and the best stacking ensembles for m = 4, which exhibited the best overall performance. The confusion matrix provides a compact summary of the classification performance of each model, presenting complete information about mislabeled classes. Notice that each row of a confusion matrix represents the instances of an actual class while each column represents the instances of a predicted class. Additionally, the stacking ensembles utilizing DTR and SVR as meta-learners were excluded from the rest of our experimental analysis, since they presented the worst regression and classification performance for all cryptocurrencies.

Tables 7-9 present the confusion matrices of the best identified ensemble models using CNN-LSTM as base learner for the BTC, ETH and XRP datasets, respectively. The confusion matrices for BTC and ETH reveal that stacking(LR)₇ is biased, since most of the instances were misclassified as "Down", meaning that this model was unable to identify possible hidden patterns despite exhibiting the best regression performance. On the other hand, bagging₁₁ exhibited a balanced prediction distribution between "Down" and "Up" predictions, demonstrating its superiority over the rest of the forecasting models, followed by averaging₁₁. Regarding the XRP dataset, the best-performing ensembles presented the highest prediction accuracy and the best trade-off between true positive and true negative rates, meaning that these models may have identified some hidden patterns.

Similarly, the confusion matrices of the best identified ensemble models using CNN-BiLSTM as base learner were examined for the BTC, ETH and XRP datasets, respectively. The confusion matrices for the BTC dataset demonstrated that both averaging₁₁ and bagging₁₁ presented the best performance, while stacking(LR)₇ was biased, since most of the instances were misclassified as "Down". Regarding the ETH dataset, both averaging₁₁ and bagging₁₁ were considered biased since most "Up" instances were misclassified as "Down". In contrast, both stacking ensembles presented the best performance, with stacking(kNN)₁₁ reporting a slightly better trade-off between sensitivity and specificity. Regarding the XRP dataset, bagging₁₁ presented the highest prediction accuracy and the best trade-off between true positive and true negative rates, closely followed by stacking(kNN)₁₁.

In the rest of this section, we evaluate the reliability of the best reported ensemble models by examining whether they have properly fitted the time series. In other words, we examine whether the models' residuals, defined by ε̂ₜ = yₜ − ŷₜ, are identically distributed and asymptotically independent; the residuals are used to evaluate whether a model has properly fitted the time series.
For this purpose, we utilize the AutoCorrelation Function (ACF) plot [47], which is obtained from the linear correlation of each residual ε̂ₜ with the residuals at previous lags, ε̂ₜ₋₁, ε̂ₜ₋₂, . . ., and illustrates the intensity of the temporal autocorrelation. Notice that if a forecasting model violates the assumption of no autocorrelation in the errors, its predictions may be inefficient, since there is some additional information left over which should be accounted for by the model. Figures 4-6 present the ACF plots for the BTC, ETH and XRP datasets, respectively. Notice that the confidence limits (blue dashed lines) are constructed assuming that the residuals follow a Gaussian probability distribution.

It is worth noticing that the averaging₁₁ and bagging₁₁ ensemble models violate the assumption of no autocorrelation in the errors for the BTC and ETH datasets, which suggests that their forecasts may be inefficient. More specifically, the significant spikes at lags 1 and 2 imply that there exists some additional information left over which should be accounted for by the models. Regarding the XRP dataset, the ACF plot of averaging₁₁ shows that the residuals have no autocorrelation, while the ACF plot of bagging₁₁ shows a spike at lag 1, which violates the assumption of no autocorrelation in the residuals. Both ACF plots of the stacking ensembles are within the 95% confidence interval for all lags for the BTC and XRP datasets, which verifies that the residuals have no autocorrelation. Regarding the ETH dataset, the ACF plot of stacking(LR)₇ reports a small spike at lag 1, which reveals some, but not particularly large, autocorrelation of the residuals, while the ACF plot of stacking(kNN)₁₁ reveals small spikes at lags 1 and 2, implying some autocorrelation.

Corresponding ACF plots were also examined for the ensembles utilizing CNN-BiLSTM as base learner for the BTC, ETH and XRP datasets, respectively. Both averaging₁₁ and bagging₁₁ violate the assumption of no autocorrelation in the errors for all cryptocurrencies, implying that these models have not properly fitted the time series. In more detail, the significant spikes at lags 1 and 2 suggest that the residuals are not identically distributed and asymptotically independent for all datasets. The ACF plots of the stacking ensembles present some, but not particularly large, autocorrelation in the residuals for the BTC and XRP datasets, while for the ETH dataset the significant spikes at lags 1 and 2 suggest that the model's predictions may be inefficient.
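As a minimal illustration of this diagnostic (not the authors' code), the snippet below draws an ACF plot with 95% confidence bounds for a residual series; the synthetic residuals are placeholders used only to keep the example self-contained.

```python
import numpy as np
import matplotlib.pyplot as plt
from statsmodels.graphics.tsaplots import plot_acf

rng = np.random.default_rng(0)
residuals = rng.normal(size=500)  # placeholder for e_t = y_t - y_hat_t from a fitted ensemble

plot_acf(residuals, lags=20, alpha=0.05)  # 95% confidence band; spikes outside it
plt.show()                                # indicate leftover autocorrelation
```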
Discussion
In this section, we discuss the proposed ensemble models, the experimental results and the main findings of this work.
Discussion of Proposed Methodology
Cryptocurrency prediction is considered a very challenging forecasting problem, since the historical prices follow a random walk process characterized by large variations in volatility, although a few hidden patterns may exist [48,49]. Therefore, the investigation and development of a powerful forecasting model for assisting decision making and investment policies are considered essential. In this work, we incorporated advanced deep learning models as base learners into three of the most popular and widely used ensemble methods, namely averaging, bagging and stacking, for forecasting cryptocurrency hourly prices.
The motivation behind our approach is to exploit the advantages of ensemble learning and advanced deep learning techniques. More specifically, we aim to exploit the effectiveness of ensemble learning for reducing the bias or variance of error by exploiting multiple learners, and the ability of deep learning models to learn the internal representation of the cryptocurrency data. It is worth mentioning that, since the component deep learning learners are initialized with different weight states, each of the developed models focuses on different identified patterns. Therefore, the combination of these learners via an ensemble learning strategy may lead to a stable and robust prediction model.
In general, deep learning neural networks are powerful prediction models in terms of accuracy, but they are usually unstable in the sense that variations in their training set or in their weight initialization may significantly affect their performance. The bagging strategy constitutes an effective way of building efficient and stable prediction models from unstable and diverse base learners [50,51], aiming to reduce variance and avoid overfitting. In other words, bagging stabilizes the unstable deep learning base learners and exploits their prediction accuracy, focusing on building an accurate and robust final prediction model. However, the main problem of this approach is that, since bagging averages the predictions of all models, redundant and non-informative models may add too much noise to the final prediction result; therefore, possible patterns identified by some informative and valuable models may disappear.
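A minimal sketch of this mechanism, assuming a Keras-style model constructor such as the CNN-LSTM sketched earlier and NumPy arrays of training windows (all names are illustrative, not the authors' implementation):

```python
import numpy as np

def bagging_forecast(build_model, X_train, y_train, X_test, n_models=11, epochs=50):
    # Each component model is trained on a bootstrap resample of the training
    # windows; the final forecast is the average of the component forecasts.
    rng = np.random.default_rng(0)
    preds = []
    for _ in range(n_models):
        idx = rng.integers(0, len(X_train), size=len(X_train))  # sample with replacement
        model = build_model()
        model.fit(X_train[idx], y_train[idx], epochs=epochs, batch_size=512, verbose=0)
        preds.append(np.asarray(model.predict(X_test)).ravel())
    return np.mean(preds, axis=0)  # averaging stabilizes the unstable base learners
```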
On the other hand, stacking ensemble learning utilizes a meta-learner in order to learn the prediction behavior of the base learners, with respect to the final target output. Therefore, it is able to identify the redundant and informative base models and "weight them" in a nonlinear and more intelligent way in order to filter out useless and non-informative base models. As a result, the selection of the meta-learner is of high significance for the effectiveness and efficiency of this ensemble strategy.
Discussion of Results
All compared ensemble models were evaluated considering both regression and classification problems, namely the prediction of the cryptocurrency price in the following hour (regression) and the prediction of whether the price will increase or decrease in the following hour (classification). Our experiments revealed that the incorporation of deep learning models into an ensemble learning framework improved the prediction accuracy in most cases, compared to a single deep learning model.
Bagging exhibited the best overall score in terms of classification accuracy, closely followed by averaging and stacking(kNN), while stacking(LR) reported the best regression performance. The confusion matrices revealed that stacking(LR) was actually biased, since most of the instances were wrongly classified as "Down", whereas bagging and stacking(kNN) exhibited a balanced prediction distribution between "Down" and "Up" predictions. It is worth noticing that bagging can be interpreted as a perturbation technique aiming at improving robustness, especially against outliers and highly volatile prices [37]. The numerical experiments demonstrated that averaging ensemble models trained on perturbed training datasets is a means to favor invariance to these perturbations and to better capture the directional movements of the presented random walk processes. However, the ACF plots revealed that the bagging ensemble models violate the assumption of no autocorrelation in the residuals, which implies that their predictions may be inefficient. In contrast, the ACF plots of stacking(kNN) revealed that the residuals have no or small (inconsiderable) autocorrelation. This is probably due to the fact that the use of a meta-learner, which is trained on the errors of the base learners, is able to reduce the autocorrelation in the residuals and provide more reliable forecasts. Finally, it is worth mentioning that the increment of component learners had little or no effect on the regression performance of the ensemble algorithms in most cases.

Summarizing, stacking utilizing an advanced deep learning base learner and kNN as meta-learner may be considered the best forecasting model for the problem of cryptocurrency price and movement prediction, based on our experimental analysis. Nevertheless, further research has to be performed in order to improve the prediction performance of our framework by creating even more innovative and sophisticated algorithmic models. Moreover, additional experiments with respect to the trading-investment profit returns based on such prediction frameworks have to be performed.
Conclusions
In this work, we explored the adoption of ensemble learning strategies with advanced deep learning models for forecasting cryptocurrency price and movement, which constitutes the main contribution of this research. The proposed ensemble models utilize state-of-the-art deep learning models as component learners, which are based on combinations of LSTM, BiLSTM and convolutional layers. An extensive and detailed experimental analysis was performed considering both the classification and regression performance of the averaging, bagging, and stacking ensemble strategies. Furthermore, the reliability and efficiency of the predictions of each ensemble model were studied by examining the residuals for autocorrelation.
Our numerical experiments revealed that ensemble learning and deep learning may be efficiently adapted to develop strong, stable, and reliable forecasting models. It is worth mentioning that, due to the sensitivity of the proposed ensemble models to various hyper-parameters and their high complexity, it is possible that their prediction ability could be further improved by additional configuration optimization and, especially, feature engineering. Nevertheless, in many real-world applications, the selection of the base learner as well as the specification of the number of learners in an ensemble strategy constitute significant choices in terms of prediction accuracy, reliability, and computation time/cost. This fact acts as a limitation of our approach. Incorporating deep learning models (which are by nature computationally expensive) in an ensemble learning approach considerably increases the total training and prediction time. Clearly, such an ensemble model would be inefficient for real-time and dynamic application tasks with high-frequency inputs/outputs, compared to a single model. However, in low-frequency applications where the objective is accuracy and reliability, such a model could shine.
Our future work is concentrated on the development of an accurate and reliable decision support system for cryptocurrency forecasting, enhanced with new performance metrics based on profits and returns. Additionally, an interesting idea worth investigating in the future is that, in certain times of global instability, a significant number of outliers appear in the prices of all cryptocurrencies. To address this problem, an intelligent system might be developed based on an anomaly detection framework, utilizing unsupervised algorithms in order to "catch" outliers or other rare signals which could indicate cryptocurrency instability.
Conflicts of Interest:
The authors declare no conflict of interest.
Reducing patient harm following inadvertent endobronchial placement of nasogastric tubes in patients with SARS-CoV-2
Introduction: Nasogastric tube (NGT) insertion is essential for enteral feeding but can potentially cause significant injury to the lungs (1). Following a critical incident, we audited our practice of NGT insertion and the consequences of injury in patients with COVID-19, caused by the SARS-CoV-2 virus.

Methods: NGT insertion followed a local standard safety protocol; NGTs were inserted by consultants or senior registrars in anaesthesia and critical care medicine, or by advanced critical care practitioners. Individual practitioners were able to choose their technique of insertion. All patients had their post-NGT-insertion chest x-ray reviewed, and those with misplaced NGTs had their case notes reviewed. Early in the outbreak, blind insertion was recommended in our institution to reduce aerosolisation; this was rapidly changed to direct visualisation with laryngoscopy as our experience managing SARS-CoV-2 patients increased.

Results: During the SARS-CoV-2 pandemic, a total of 135 NGTs were inserted into ventilated and/or extracorporeal membrane oxygenation (ECMO) patients. The position of all NGTs was confirmed by a chest radiograph. Eleven (8.1%) were inadvertently endobronchial, of which four developed pneumothoraces (Figure 1). Three patients (including both who had received ECMO) died and a fourth is currently undergoing a prolonged respiratory wean. No patients were fed or received drugs via a misplaced NGT.

Figure 1. Chest radiograph of a patient with inadvertent NGT placement in the right lower lobe. The path of the tube suggests breach of the bronchial tree and direct injury to the lung parenchyma (arrowhead). A CT the following day showed a large pneumothorax (arrowhead), some haemothorax (black arrow) and severe ground-glass changes consistent with SARS-CoV-2 (white arrow).

Discussion: Our inadvertent endobronchial NGT rate is relatively high compared to our previous clinical experience, which we believe may be related to the challenges of working with cumbersome personal protective equipment and/or changed practice to attempt to reduce transmission of SARS-CoV-2 (2). We suspect the lung parenchyma is particularly fragile in acute respiratory distress syndrome caused by SARS-CoV-2, which contributes to the high rate of pleural breach and subsequent poor outcome (3). We recommend experienced operators place NGTs and do so using direct or videolaryngoscopy to minimise the risk of incorrect placement. We would like to thank the families of our patients for their permission to share the images in this work.

References:
1. Andresen EN, Frydland M, Usinger L. Deadly pressure pneumothorax after withdrawal of misplaced feeding tube: a case report. Journal of Medical Case Reports. 2016;10:30.
2. BAPEN. Covid-19 and enteral tube feeding safety. Redditch, UK: BAPEN; 2020. Available from: https://www.bapen.org.uk/pdfs/covid-19/covid-19-and-enteral-tube-feeding-safety-16-04-20.pdf
3. Rassias AJ, Ball PA, Corwin HL. A prospective study of tracheopulmonary complications associated with the placement of narrow-bore enteral feeding tubes. Critical Care (London, England). 1998;2(1)
PP.56
Clinical audit of early extubation in a tertiary referral cardiac surgery unit
E. O'Riordan, C. Keane, N. Dowd
St James' Hospital - Department of Anaesthesiology, Dublin, Ireland

Introduction: Early extubation is a recognised standard of care for cardiac surgery patients [1]. Multiple studies have shown that there is no increase in morbidity and mortality for patients extubated early following cardiac surgery [2,3], and that early extubation decreases Intensive Care Unit (ICU) stay [2,4]. The JAMA 2019 guidelines define early extubation as within 6 hours post-operatively. The aim of this study was to assess local compliance with international guidelines.
Methods: We performed a retrospective analysis of all cardiac surgery undertaken in our institution over a 1-year period (n = 343). 79% were male (n = 273) and average age at time of operation was 64 years. We excluded patients who were not admitted to ICU post operatively (9%, n = 31) or for whom incomplete data was available (5%, n = 20). A total of 292 patient electronic records were therefore analysed. The extubation time was recorded, as well as total length of ICU stay, and total post-operative stay.
Results: Of the 292 patients analysed, the median time for extubation was 5.5 hours. 58% of patients were intubated for 6 hours or less. We found a significantly shorter length of ICU stay for patients intubated for 6 hours or less (2.17 days vs 2.82 days, P = .0275). However, there was no significant difference in total post-operative stay between the two groups (9.3 vs 10.7 days, P = .4004).
Discussion: We are extubating over 50% of patients within 6 hours post-operatively. In line with previous research, there is a statistically significant reduction in overall time spent in ICU when patients are extubated within 6 hours post-operatively; however, this does not affect total inpatient stay. Overall, regardless of time of extubation, the median ICU stay was short at 2.5 days.
Introduction: COVID-19 induces a pro-inflammatory, hypercoagulable state with marked elevations of ferritin, C-reactive protein, interleukin, and D-dimers. Observed consequences include pro-thrombotic disseminated intravascular coagulation (DIC) with a high rate of venous thromboembolism (VTE), and elevated D-dimers with high fibrinogen and low anti-thrombin levels. Pulmonary congestion appears to be due to micro-vascular thrombosis and occlusion on pathological examination.1 The acquired pro-thrombotic state and associated poorer outcomes seen in critically ill COVID-19 patients 2,3 have led to such patients being treated empirically with systemic anticoagulants. Unfractionated heparin (UFH) or low molecular weight heparin (LMWH) have both been used.2,3

Methods: Review of COVID-19 positive adult patients admitted to the critical care unit between 10th March and 13th May 2020 with severe respiratory failure requiring invasive ventilation.
Discussion:
The risk of any significant haemorrhage in patients systemically anticoagulated for VTE with unfractionated heparin (UFH) is 2-3%,4 and that of anticoagulant-related intracranial haemorrhage (AICH) in patients systemically anticoagulated with UFH is 1-2.7% (in patients treated
A Person-Centered Analysis of Adolescent Multicultural Socialization Niches and Academic Functioning
Despite the growing cultural diversity worldwide, there is scarce research on how socialization processes prepare youth to respond to increasing multicultural demands and the degree to which these socialization opportunities inform youth academic functioning. This study used a person-centered approach to identify profiles or niches based on the degree and consistency of multicultural socialization experiences across school, peer, and family settings and to examine the associations between identified niches and markers of academic functioning (i.e., emotional and behavioral academic engagement, academic aspirations and expectations) in a sample of adolescents (N = 717; Mage = 13.73 years). Participants (49.9% girls) were from the U.S. Southwest and represented multiple ethno-racial backgrounds (31.8% Hispanic/Latinx, 31.5% Multiethnic, 25.7% White, 7.3% Black or African American, 1.4% Asian American or Pacific Islander, 1.4% American Indian or Alaska Native, and 1% Arab, Middle Eastern, or North African). Six distinct multicultural socialization niches were identified. Three niches had similar patterns across school-peer-family but ranged in the degree of socialization. The cross-setting similar higher socialization niche (Niche 6) demonstrated greater socialization than the cross-setting similar moderate (Niche 5) and lower socialization (Niche 4) niches, which had moderate and lower socialization, respectively. Three niches demonstrated cross-setting dissimilarity which ranged in the type of cross-setting contrast and the degree of socialization. The cross-setting dissimilar school contrast socialization niche (Niche 3) had greater dissimilarities between socialization opportunities in the school setting compared to the peer and family settings and demonstrated the lowest levels of socialization of all niches. The other two niches, the cross-setting dissimilar peer contrast (Niche 1) and greater peer contrast socialization (Niche 2) niches had larger dissimilarities between socialization opportunities in the peer setting than the school and family settings. In the former, however, the contrast was lower, and socialization ranged between very low to low. In the latter, the contrast was higher and socialization ranged from very low to moderate. Most adolescents were in the cross-setting similar lower socialization niche or in the cross-setting dissimilar niches. Adolescents in the cross-setting similar higher multicultural socialization demonstrated greater emotional and behavioral academic engagement than adolescents in most of the other niches. Adolescents in the cross-setting dissimilar school contrast niches demonstrated lower emotional and behavioral academic engagement and lower academic expectations than adolescents in some of the other niches. The results emphasize the collective role of school, peer, and family multicultural socialization on emotional and behavioral academic engagement.
Introduction
We live in a multicultural world defined by cultural diversity. Indeed, the United States (U.S.) is more ethnically and racially diverse than ever (U.S. Census Bureau, 2021, August 12), and similar growth has emerged worldwide (Pew Research Center, 2019, April 22); thus, youth interact more frequently with peers from multiple ethno-racial backgrounds (Nishina et al., 2019). These changing demographics have essential implications for the development and adjustment of youth from ethno-racial majoritized and minoritized groups (Berry et al., 2022), particularly for their academic adjustment as youth attend schools with growing ethno-racial diversity. There is scarce research on how socialization processes equip youth to respond to increasing multicultural demands and the degree to which these socialization experiences may inform youth academic functioning (i.e., academic engagement, aspirations, and expectations). This study addresses this gap by examining intercultural or multicultural socialization experiences across multiple settings (i.e., schools, peers, and families) and their links with youth academic functioning.
Intercultural or multicultural socialization involves efforts to teach youth about cultural pluralism and the importance of equal treatment across members from all ethno-racial groups (Berry & Sam, 2014). Multicultural socialization is theorized to be a critical process supporting youth academic functioning in multicultural societies (Barrett, 2018). Through multicultural socialization, salient proximal settings such as schools, peers, and families provide youth with opportunities to learn about multiple cultures, appreciate the value of cultural pluralism, and practice multicultural competencies (Berry et al., 2022). These opportunities and competencies are theorized to support youth's overall adjustment in multicultural societies (Barrett, 2018), particularly their academic functioning (Nishina et al., 2019).
Multiculturalism research at the individual and societal levels has substantially increased in the past decade (e.g., The Oxford Handbook of Multicultural Identity). This body of work points to the benefits and challenges youth experience in multicultural societies and notes variability across proximal settings (Benet-Martínez & Hong, 2014). Although limited, recent empirical work has focused on multicultural socialization in the school setting and provides evidence for the positive link between multicultural socialization and youth academic functioning (e.g., Byrd, 2019). However, multicultural socialization beyond the school setting, including peer and family settings, and how variability across these intersecting socialization settings informs youth academic functioning is unknown.
Guided by ecological models highlighting the role of intersecting forces across proximal settings informing youth's adjustment (e.g., Bronfenbrenner & Morris, 2006), the current study (1) identifies multicultural socialization niches defined by the opportunities afforded to youth to learn about cultural pluralism and the importance of equal treatment of all ethno-racial groups across school, peer, and family settings, and (2) examines how these niches inform adolescent academic functioning in a U.S. ethno-racially diverse sample. This study focuses on academic functioning -marked by emotional and behavioral academic engagement, academic aspirations, and academic expectations (Skinner et al., 2022)-because this is a multifaceted and salient developmental task that significantly decreases throughout adolescence (Eccles & Roeser, 2011) but has significant implications for future career (May & Witherspoon, 2019) and academic success (Wang & Peck, 2013).
Multicultural Socialization Niches
There is strong theoretical justification for considering the cross-setting, unique (person-centered) nature of adolescent multicultural socialization niches and how these unique niches inform adolescent academic functioning (e.g., White et al., 2018). During adolescence, socialization settings outside the family become increasingly salient (Crosnoe & Benner, 2015), particularly school and peer settings become prominent (Eccles & Roeser, 2011). Importantly, socialization processes are influenced by the beliefs and practices that characterize these settings (Super & Harkness, 2002), emerge from adaptive cultural models reflecting individual and societal values (White et al., 2018), and involve multidirectional, interactive processes between youth and their settings (Umaña-Taylor et al., 2013). It follows that cross-setting variability and intersecting forces shape the multicultural socialization niches which adolescents are negotiating (Super & Harkness, 2002), and these unique niches inform youth attitudes toward cultural diversity (Miklikowska et al., 2019), the development of multicultural competencies (White et al., 2018), and ultimately their academic functioning (e.g., García Coll et al., 1996).
Prior empirical work provides evidence of the unique and intersecting nature of youth cultural socialization niches. For instance, research assessing a combination of heritage (i.e., efforts to teach youth about their heritage and cultural background); national (i.e., efforts to teach youth about U.S. mainstream culture); and multicultural socialization identified multiple socialization niches with different degree (higher vs. lower levels) and consistency (similar vs. dissimilar) of socialization experiences across school and family settings (Byrd & Ahn, 2020). Similarly, research examining heritage and national cultural socialization separately also identified multiple socialization niches which varied in the degree and consistency of socialization experiences across peer and family settings (Wang & Benner, 2016). This work highlights the considerable heterogeneity in U.S. adolescent experiences of cultural socialization across school, peer, and family settings and the importance of using cross-setting, person-centered approaches to capture this variability (i.e., degree and consistency) regarding multicultural socialization.
Considering the variability in the degree of multicultural socialization experiences, some adolescents may be embedded in or negotiating higher multicultural socialization niches where they are frequently provided opportunities to learn about other cultures across these settings. In contrast, others may be part of lower multicultural socialization niches with little to no opportunity. Consistent with prior theoretical (Super & Harkness, 2002) and empirical work (Umaña-Taylor & Hill, 2020) highlighting the importance of frequent, ample socialization opportunities for youth to learn from and draw adjustment-related benefits from these experiences, it is likely that youth negotiating niches with higher multicultural socialization demonstrate more knowledge and awareness about other cultures and thus gain more academic-related benefits.
Importantly, beyond variability in the degree of multicultural socialization, variability can also emerge in the consistency (similarity vs. dissimilarity) of socialization messages and opportunities that youth experience across their proximal settings (Byrd & Ahn, 2020). Specifically, adolescents may negotiate niches where schools, peers, and families match in the content and degree of socialization efforts (cross-setting similarity) or in niches where there is a mismatch across these settings (cross-setting dissimilarity). Prior research underscores the importance of cross-setting similarity in youth cultural socialization experiences for their academic functioning (Wang & Benner, 2016). These studies, however, have not explicitly focused on multicultural socialization.
Findings from prior studies suggest that variability in both degree and consistency of cultural socialization experiences is important. Further, ecological models (e.g., García Coll et al., 1996) emphasize that school, peer, and family settings influence the kinds of transactions adolescents negotiate and that mutually reinforcing repetition of similar influences across these settings has important implications for youth adjustment (Super & Harkness, 2002). Thus, in multicultural socialization niches characterized by cross-setting similarity, adolescents may encounter comparable cross-setting messages and opportunities to learn about other cultures and are likely to experience more academic-related benefits from these socialization experiences in a cohesive niche. Conversely, given that socialization efforts reflect adaptive cultural models (White et al., 2018), adolescents who experience cross-setting dissimilarity may be exposed to competing affordances and demands in each of these settings, and this mismatch may likely diminish their understanding of the value of cultural diversity and thus may have a cost to their academic functioning.
A cross-setting, person-centered view of the adolescent niche may be particularly important for the current examination because multicultural socialization processes involve a degree of understanding that cultural diversity permeates all aspects of adolescents' lives in multicultural societies (Benet-Martínez & Hong, 2014). Further, adolescent academic functioning within schools ranging in ethnoracial diversity involves the ability to interact with and learn from individuals from diverse cultures, ethnicities, and races; to develop a sense of belonging amid cultural pluralism (Barrett, 2018); and to meet multiple demands, which may be, at times, competing with one another (Celeste et al., 2019).
Links Between Multicultural Socialization Niches Across Schools, Peers, and Families and Youth Adjustment
Multicultural socialization opportunities have been theorized to support youth overall development and adjustment (Barrett, 2018), including their academic functioning (Nishina et al., 2019). Further, empirical work supports these notions. Given the scarce literature focused on the link between multicultural socialization and youth academic functioning, this study draws from work capturing related types of cultural socialization experiences and links with different indicators of psychosocial adjustment. Most empirical work on multicultural socialization has focused on the school setting and provides support for the positive link between multicultural socialization and academic functioning. For instance, multicultural socialization has been associated with greater school belonging and with greater college satisfaction among a U.S. ethno-racially diverse sample of college students (Byrd, 2019). Further, in German adolescent samples, multicultural socialization has been directly (Schachner et al., 2021) and indirectly via youth heritage and national identities (Schachner et al., 2016) associated with positive psychosocial adjustment, including academic functioning. These studies highlight how multicultural socialization in schools, likely through teachers' efforts to promote positive intergroup contact (Karataş et al., 2023) and responses to ethnic-racial victimization (Bayram Özdemir & Özdemir, 2020), inform different indicators of academic functioning but do not consider the intersecting role of peer and family socialization settings.
During adolescence, peers become an important socialization setting providing youth opportunities to learn about themselves and others through various cultural socialization experiences (Eccles & Roeser, 2011). Ethnographic research reveals that adolescents and young adults engage in meaningful conversations about their heritage or ethnic-racial identity development, racial inequality, and discrimination with their friends and peers (Moffitt & Syed, 2021; Syed & Juan, 2012). Further, peers play a role in the transmission of culture and in youth exploration and navigation of what it means to be a member of a particular ethnic, racial, or cultural group (Wang & Lin, 2023), as well as in the promotion of openness to cultural diversity and intergroup peer inclusion (Burkholder et al., 2021; Killen et al., 2022). Social network-informed studies provide additional evidence for peers playing a vital role in cultural socialization. This work shows that adolescents from the U.S. and Northern Europe influence each other to become similar in terms of their attitudes toward intergroup relationships (Zingora et al., 2020) and anti-immigrant and xenophobic attitudes (Bohman & Kudrnáč, 2022; van Zalk & Kerr, 2014). Prior research underscores that friends and peers contribute to cultural socialization by shaping heritage or ethnic-racial (Santos et al., 2017) and national identity development, and these have important implications for youth academic functioning (Safa et al., 2022). These studies elucidate the important socialization roles of friends and peers in adolescent social and academic development but have not focused on multicultural socialization efforts.
Given the increasingly salient role of peers as a primary socialization setting, this study theorizes that peer multicultural socialization would foster adolescent academic functioning. Specifically, peer efforts to support adolescents' understanding of cultures and ethnic-racial groups other than one's own and to foster positive intergroup contact may promote diversity and multicultural attitudes and skills, positive relationships with peers from different cultural backgrounds, and a sense of belonging in culturally plural academic settings (Nishina et al., 2019). These competencies are theorized to support adolescent academic functioning by providing youth with affective, behavioral, and cognitive tools to adequately respond to academic demands across multicultural settings (García Coll & Szalacha, 2004).
In the family setting, parents' (or caregivers') socialization processes aim to equip youth to thrive within specific (multi)culturally bounded contexts (Vélez-Agosto et al., 2017). Thus, parents participate in multiple socialization practices to achieve this goal (Bornstein & Lansford, 2010). For instance, parents engage in heritage culture socialization practices, and prior work has documented a positive link between youth's opportunities to learn about their heritage background and their academic functioning (Huynh & Fuligni, 2008;Wang et al., 2020). Further, parents also engage in bicultural socialization processes that provide youth opportunities to learn about their heritage and the national culture (Cheah et al., 2013;Kim & Hou, 2016). These parental bicultural socialization experiences have been documented to be positively linked with adolescent psychosocial and cognitive adjustment (Knight et al., 2016;Zhang et al., 2018), but these studies did not focus on academic functioning.
Although prior work has not examined multicultural socialization, it follows that family multicultural socialization would foster adolescent academic functioning. Specifically, by providing youth with opportunities to develop an understanding of cultures and ethnic-racial groups other than one's own, parents and caregivers help youth understand the challenges and opportunities of cultural pluralism and ethnic-racial socialization (for White children and youth, see Hazelbaker et al., 2022; for children and youth of color, see Rivas-Drake et al., 2022), internalize multiculturalism beliefs (Kim & Hou, 2016), and navigate everyday interactions with people from culturally diverse backgrounds (Neblett et al., 2012). These competencies may instill a greater sense of efficacy in navigating multicultural academic settings and demands (García Coll & Szalacha, 2004).
Schools, peers, and families are salient proximal settings comprising adolescent socialization niches (Super & Harkness, 2002). These settings work in tandem with one another, and their joint forces may promote or inhibit multicultural socialization goals and associated academicrelated benefits. Adolescents in cross-setting similar niches with higher degrees of multicultural socialization are theorized to reap the most benefits regarding their academic functioning, and empirical work supports this notion. Indeed, youth negotiating niches characterized by cross-setting similarity with higher levels of peer and family heritage or national cultural socialization demonstrated greater academic adjustment than youth in other niches characterized by either cross-setting similarity with lower socialization levels or cross-setting dissimilarity (i.e., higher parent socialization; lower peer socialization; Wang & Benner, 2016). Similarly, adolescents negotiating niches characterized by cross-setting similarity with higher degrees of socialization (i.e., a combination of higher heritage, national, and multicultural socialization across school and family settings) demonstrated greater academic engagement and aspirations than adolescents in niches characterized by either cross-setting similarity with lower socialization levels or cross-setting dissimilarity (Byrd & Ahn, 2020). Across these studies, there was no difference between the lower-level cross-setting similar niche and the cross-setting dissimilar niche in academic outcomes, highlighting the significance of both degree and consistency of cultural socialization experiences across settings. These findings underscore the importance of youth exposure to at least moderate socialization opportunities and the need to consider the nuanced ways in which the degree and consistency of adolescent socialization niches inform their academic functioning and adjustment. The current study relies on a cross-setting, person-centered approach (Bergman, 2001) to capture the variability that characterizes adolescent multicultural socialization niches across school, peer, and family settings and to examine how these unique niches inform adolescent academic functioning.
Indicators of Social Position and Multicultural Socialization Niches
Socialization efforts involve multidirectional, interactive processes between youth and their proximal settings (Umaña-Taylor et al., 2013). Gender, ethnicity or race, and parental nativity are key indicators of social position factors informing youth socialization processes and developmental pathways (Stein et al., 2016). Specifically, these factors may influence the affordances and demands youth encounter across school, peer, and family settings and their intersecting forces. In other words, these factors may inform the degree and consistency of multicultural socialization opportunities that youth experience across these settings.
Albeit limited, some research highlights the importance of considering the role of these social position indicators on youth cultural socialization niches. Specifically, prior work has documented that neither gender, ethnicity/race, nor parental nativity (i.e., having at least one foreign-born parent) informed the types of heritage cultural socialization niches (peer and family settings) youth were negotiating (Wang & Benner, 2016). However, this research revealed that ethnicity/race and parental nativity informed youth's national cultural socialization niches. Particularly, Latinx adolescents were more likely than Black adolescents to be in the cross-setting dissimilar national cultural socialization niche. Similarly, adolescents with at least one foreign-born parent were more likely to be in the cross-setting dissimilar national cultural socialization niche than those with only U.S.-born parents. No differences were observed for gender (Wang & Benner, 2016). Additionally, prior work focused on a combination of heritage, national, and multicultural socialization niches has documented that Black adolescents, compared to White adolescents, were less likely to be in the cross-setting dissimilar socialization niche (Byrd & Ahn, 2020). No differences were observed for gender (Byrd & Ahn, 2020). These findings point to the interplay between youth's social position and cultural socialization experiences. Given the scarce research and that findings vary based on the type of socialization practices and indicators of social position, exploratory analyses were conducted to examine how gender, ethnicity/race, and parent nativity would inform the type of multicultural socialization niches youth are more likely to be negotiating.
Current Study
Prior research has rarely examined an increasingly salient socialization process in multicultural societies, namely multicultural socialization, among a U.S. ethno-racially diverse adolescent sample. First, the current study identified adolescents' multicultural socialization niches, defined by the degree and consistency of multicultural socialization experiences across school, peer, and family settings (Aim 1). Second, the study examined how these unique niches inform key markers of youth academic functioning (i.e., emotional and behavioral academic engagement; academic aspiration and expectations; Aim 2). Based on prior theoretical and empirical work, it was expected that several niches or profiles would emerge characterized by different types of degree and cross-setting consistency of multicultural socialization experiences (Hypothesis 1). Further, these unique niches were expected to have implications for youth academic functioning. Specifically, youth negotiating niches characterized by cross-setting similarity with higher levels of multicultural socialization were expected to demonstrate better academic functioning than youth in other niches (Hypothesis 2a). It was also expected that youth negotiating niches characterized by cross-setting dissimilarity with lower levels of multicultural socialization would demonstrate lower academic functioning than youth in other niches (Hypothesis 2b). Finally, based upon extant theory recognizing the influence of salient social position indicators (e.g., gender, ethnicity/race, and parental nativity) on youth socialization processes and developmental competencies, exploratory analyses examined whether these key indicators of social position informed the likelihood to be in a particular multicultural socialization niche (exploratory Aim 3).
Participants
Among Hispanic/Latinx youth in the study sample, 89.5% were of Mexican heritage, 2.5% were Puerto Rican, 1% were Salvadoran, and 6.8% were of another origin. A majority of participants (66.6%) were 3rd generation (i.e., youth and parents born in the U.S.), 17.7% were considered 2.5th generation (i.e., youth and one parent born in the U.S. and the other parent born abroad), 11.8% were 2nd generation (i.e., youth born in the U.S. and both parents born abroad), and 4% were 1st generation (i.e., youth and parents born abroad). Thus, 33.4% of the sample had at least one immigrant or foreign-born parent. Participants reported that their parents were married, never divorced (44.5%), divorced (25.1%), separated (12.2%), widowed (2%), single, never married (8%), or living together but never married (8%).
In terms of subjective appraisal of socioeconomic status, 24.3% of participants reported that they never had to worry about money, 38.2% stated that their family only had to worry about money for fun and extras, 35.1% reported they had just enough to get by, and 2.3% stated that they did not have enough to get by. Thirty-nine percent of participants reported receiving free or reduced-price lunch at school. Participants reported on their parents' educational levels. Maternal education levels were as follows: 10.4% had less than a high school diploma, 21.3% had a high school diploma or GED, 8.4% had an associate degree, 21.8% had completed some college, 17.1% had a college degree, and 20% had a professional degree (MA, PhD, JD, or MD). Paternal education levels were as follows: 15% had less than a high school diploma, 31.4% had a high school diploma or GED, 7.5% had an associate degree, 18.5% had completed some college, 12% had a college degree, and 15.7% had a professional degree (MA, PhD, JD, or MD).
Procedure
Participants were 6th grade students from two public middle schools and 9th grade students from two public high schools in a metropolitan city in the Southwestern U.S. Teachers provided all 6th and 9th grade students parental consent letters in English and Spanish to share with their parents/caregivers. Students received $10 for returning their signed parental consent forms to teachers, regardless of their study participation decision. Teachers were provided $50 and two movie tickets for their efforts in reminding students to return consent forms. Participating students with signed parental consent forms provided assent before completing their surveys. Across the four schools, rates of consent ranged from 71% to 81%. These study procedures were approved by Arizona State University's institutional review board (Protocol #8845).
Data collection took place in December 2019 and early January 2020. Participants completed self-reported questionnaires in English during their regular school hours over two class periods (approximately 90 minutes total). School staff and research project assistants were available to answer any questions as participants completed the survey.
School multicultural socialization
Youth rated the extent to which their schools provided opportunities for them to learn about ethno-racial groups and cultures other than their own and about the importance of cultural pluralism using the promotion of cultural competence subscale of the School Climate for Diversity Secondary Scale (Byrd, 2017). Prior work has provided support for the validity and reliability of this subscale among ethnoracially diverse adolescent samples (Byrd, 2017). Youth responded to 5 items (α = 0.92; e.g., "In school you get to do things that help you learn about people of different races and cultures") based on their experiences in the past six months, on a Likert-type scale from 1 (not true at all) to 5 (completely true). Raw item scores were recoded to a scale from 0 to 4 to match the scaling of peer and family multicultural socialization measures described below. The average of individual youth scores was calculated, with higher scores indicating greater school multicultural socialization.
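A minimal sketch of this recoding and scoring step, with assumed column names and toy values rather than the study data:

```python
import pandas as pd

# Toy responses on the raw 1-5 scale; column names and values are assumptions.
df = pd.DataFrame({f"school_ms_{i}": [1, 3, 5] for i in range(1, 6)})

items = [f"school_ms_{i}" for i in range(1, 6)]
df[items] = df[items] - 1                                 # recode 1-5 to the 0-4 scale
df["school_multicultural_soc"] = df[items].mean(axis=1)   # mean item score; higher = more socialization
print(df["school_multicultural_soc"])
```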
Peer and family multicultural socialization
Youth were asked how often, in the past six months, their friends/peers and parents/caregivers engaged in efforts to teach them about ethno-racial groups and cultures other than their own and about the importance of equal treatment for people from all ethno-racial backgrounds. Youth responded to a total of six items adapted from the Cultural Socialization/Pluralism subscale of the Parents' Racial Socialization Scale (Hughes & Johnson, 2001). Prior work has provided support for the validity and reliability of this subscale among ethno-racially diverse adolescent samples (Nelson et al., 2018). Items assessed overt multicultural socialization from peers/friends (3-items; e.g., "Friends/peers talked to you about important people or events in the history of racial/ethnic groups other than your own?" or "Friends/ peers have done or said things to show you that all people are equal regardless of race/ethnicity?") and from parents/ caregivers (3-items; e.g., "Parents/Caregivers encouraged you to read books about other racial/ethnic groups?" or "Parents/Caregivers have done or said things to show you that all people are equal regardless of race/ethnicity?"). Response scale ranged from 0 (never) to 4 (very often). Mean scores were calculated for the friends/peers (α = 0.67) and for the parents/caregivers (α = 0.70) items, with higher scores indicating higher multicultural socialization. The terms peer multicultural socialization and family multicultural socialization will be used hereafter.
Academic functioning: Emotional and behavioral academic engagement
Youth reported on two key indicators of academic functioning, namely emotional (4 items; α = 0.90; e.g., "When we work on something in class, I feel interested") and behavioral academic engagement (6 items; α = 0.91; e.g., "When we work on something in class, I get involved") using the emotional and behavioral academic engagement subscales from the Engagement versus Disaffection with Learning Scale (Skinner et al., 2009). Prior work has provided support for the validity and reliability of these subscales among ethno-racially diverse adolescent samples (e.g., Martinez-Fuentes et al., 2021). Strong correlations between student and teacher reports and observations of academic engagement further support construct validity (Skinner et al., 2009). The response scale ranged from 0 (never) to 4 (all the time). Mean scores were calculated, with higher scores indicating higher emotional and behavioral academic engagement.
Academic functioning: Academic aspirations and expectations
Youth reported on two additional indicators of academic functioning: academic aspirations and expectations. Questions included how far they would like to go in school (aspirations) and how far they thought they would go in school (expectations). Response options were: 1 = some high school, 2 = high school graduate or GED, 3 = some college but no degree, 4 = graduate from a 2-year college, vocational, or technical school, or join the military, 5 = graduate from a 4-year college, 6 = get an MS/MA, and 7 = get a professional degree. Students aspired to attain between a 4-year and a Master's degree (M = 5.24, SD = 1.55) and expected to attain between a 2-year and a 4-year degree (M = 4.47, SD = 1.66).
Social position indicators and other covariates
Youth reported on their parents' nativity or family immigrant status (coded as 1 = at least one parent born abroad; 0 = both parents born in the U.S.), their gender (1 = girl; 0 = boy [other cases were coded as missing]), and their ethnicity-race (coded using a series of dummy codes). Students who chose multiple ethnic-racial categories were coded as Multiethnic. School site was treated as a control variable to account for the nested structure of the data and was coded using a series of dummy-codes. Grade was not included as a control because school site was reflective of adolescents' grade level.
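To make the covariate coding concrete, the sketch below shows one way such dummy coding could be done in Python with pandas; the column names and example values are hypothetical stand-ins rather than the study's variables, and the reference categories (boys, U.S.-born parents, White, School 4) would simply be the indicator columns omitted from the model.

```python
import pandas as pd

# Hypothetical youth-level covariates (illustrative values only).
df = pd.DataFrame({
    "gender":    ["girl", "boy", "girl", "other"],
    "nativity":  ["abroad", "US", "US", "abroad"],
    "ethnicity": ["Latinx", "White", "Multiethnic", "Black"],
    "school":    ["S1", "S4", "S2", "S4"],
})

# Binary indicators: 'other' gender is left missing, as in the study.
df["girl"] = df["gender"].map({"girl": 1, "boy": 0})
df["immigrant_parent"] = (df["nativity"] == "abroad").astype(int)

# Series of dummy codes for ethnicity-race and school site; drop the reference
# columns (e.g., race_White, site_S4) before entering them as predictors.
race_dummies = pd.get_dummies(df["ethnicity"], prefix="race")
site_dummies = pd.get_dummies(df["school"], prefix="site")
coded = pd.concat([df, race_dummies, site_dummies], axis=1)
```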
Data Analysis Plan
Descriptive statistics and bivariate correlations among study variables were examined prior to multivariate analyses. All endogenous variables were normally distributed and did not have outliers. To test study aims, latent profile analyses (LPA) were conducted in Mplus 8.1 (Muthén, 2004). Across study variables, there were 0% to 8% missing values. To handle the minimal missing data, since imputation is not appropriate for analytical approaches such as LPA, which assumes multiple underlying populations, full-information maximum likelihood (Aims 1 and 2; n = 704-682) and listwise deletion (i.e., exploratory Aim 3; n = 659) were used. Independent sample t-tests revealed no differences between excluded and kept cases on key study variables.
In Aim 1, this study relied on a person-centered approach (Bergman, 2001) to estimate the latent profiles of U.S. adolescents' multicultural socialization niches using mean scores of school, peer, and family multicultural socialization as indicators (Suzuki et al., 2021). Specifically, nondiagonal class invariant models, which allow for correlated indicators, were estimated. Solutions with up to 7 profiles were examined and the best-fitting model was selected based on the following criteria: smaller Akaike Information Criterion (AIC; Bozdogan, 1987), Bayesian Information Criterion (BIC; Schwarz, 1978), and sample-size adjusted BIC (aBIC; Sclove, 1987); a significant bootstrapped likelihood ratio test (BLRT; Masyn, 2013) for model K and non-significant BLRT for model K+1; a Bayes Factor (BF; Masyn, 2013) of three at a minimum to indicate at least moderate evidence for Model K compared to Model K+1; a large approximate correct model probability (cmP; Masyn, 2013) indicating the probability of a given model being correct out of all fitted models; appraisal of the smallest profile size (Ferguson et al., 2020); and conceptual interpretability of the profiles (Tofighi & Enders, 2008; Weller et al., 2020). Entropy was evaluated as a measure of class categorization but was not used as an indicator in the class enumeration stage (Nylund-Gibson & Choi, 2018).
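For intuition about two of the less familiar enumeration criteria, the sketch below computes the approximate Bayes Factor and correct model probability from a vector of BIC values using the Schwarz-weight approximations described by Masyn (2013); the BIC values shown are invented placeholders, not the values reported in Table 2.

```python
import numpy as np

def enumeration_indices(bic):
    """Approximate BF and cmP from BIC values (SIC = -BIC/2; Masyn, 2013)."""
    sic = -0.5 * np.asarray(bic, dtype=float)
    # BF comparing each model K with model K+1; values of 3 or more favor model K.
    bf_next = np.exp(sic[:-1] - sic[1:])
    # cmP: probability that each fitted model is the correct one among those compared.
    weights = np.exp(sic - sic.max())
    cmp_values = weights / weights.sum()
    return bf_next, cmp_values

# Placeholder BICs for the 1- to 7-profile solutions (BIC stops improving after 6 profiles).
bic = [5400.0, 5320.0, 5290.0, 5275.0, 5268.0, 5255.0, 5257.0]
bf, cmp_values = enumeration_indices(bic)
```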
In Aim 2, the association between the identified latent profile solution and youth academic functioning (i.e., emotional and behavioral academic engagement; academic aspirations and expectations) was examined using the DU3STEP distal outcomes method (Asparouhov & Muthén, 2014; Bakk & Vermunt, 2015). Specifically, Wald tests or mean difference comparisons were estimated between the outcome means (e.g., emotional academic engagement) for each pair of profiles (e.g., profiles 1 and 2) in the profile solution identified in Aim 1 while accounting for classification error. Significant Wald tests suggest significant mean level differences across the compared profiles.
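As a rough illustration of these pairwise comparisons, the sketch below computes a simple Wald chi-square statistic for the difference between two profile-specific outcome means from their estimates and standard errors; it ignores the covariance and classification-error adjustments that the full DU3STEP procedure applies, and the numbers are hypothetical.

```python
from math import erf, sqrt

def pairwise_wald(m1, se1, m2, se2):
    """Wald chi-square (1 df) for the difference between two profile-specific means."""
    wald = (m1 - m2) ** 2 / (se1 ** 2 + se2 ** 2)
    z = sqrt(wald)
    p_two_sided = 1 - erf(z / sqrt(2))  # equals 2 * (1 - standard normal CDF of z)
    return wald, p_two_sided

# Hypothetical emotional engagement means/SEs for two niches (not the study's estimates).
wald, p = pairwise_wald(3.1, 0.12, 2.6, 0.09)
```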
In exploratory analyses for Aim 3, the associations between social position indicators (i.e., gender, ethnicity/race, and parental nativity) and the identified latent profile solution were examined using the automatic R3STEP three-step approach (Asparouhov & Muthén, 2014; Vermunt, 2010), which estimates multinomial logistic regressions assessing the probability of being in one profile over another. Specifically, the profile solution identified in Aim 1 was regressed on the examined social position variables while accounting for profile classification error and school site. Odds ratio estimates indicate how each of the social position variables is related to the estimated profiles. Of note, due to the small sample size, cases in which adolescents identified as other gender (1.1%), AAPI (1.4%), AI/AN (1.4%), or AMENA (1%) were omitted from these analyses; therefore, only adolescents who identified as boys, girls, Latinx, Black, Multiethnic, or White were included in these exploratory analyses. For binary indicators, higher social position was coded as 0 (e.g., 1 = girls, 0 = boys). For non-binary indicators, higher social position was identified as the reference group. For example, White adolescents were identified as the reference group given their higher social position status in U.S. society (Loyd & Gaither, 2018) and documented low levels of related cultural socialization experiences (Abaied & Perry, 2021).

Results

Table 1 includes descriptive statistics and bivariate correlations for the study variables. School, peer, and family multicultural socialization were positively correlated with one another. Multicultural socialization across settings was positively correlated with emotional and behavioral academic engagement. Family multicultural socialization was positively correlated with academic aspirations and expectations.
Aim 1: Multicultural Socialization Niches
The six-profile LPA solution was selected because it was the best-fitting model with good interpretability based on theory (Bronfenbrenner & Morris, 2006; Super & Harkness, 2002; Table 2). Specifically, the six-profile solution had lower AIC, BIC, and adjusted BIC values and a higher cmP value than all the other solutions. Further, compared to the seven-profile solution, the six-profile solution had a statistically significant BLRT value and a BF value larger than 10, providing further evidence for the six-profile LPA solution as the best-fitting model. (Table 2 reports the fit indices for the compared solutions; boldface in the table marks the selected solution, and counts and proportions for the smallest profile are based on the estimated model's probabilistic likelihood of profile membership.) Supporting Hypothesis 1, the six identified profiles or niches were characterized by different types of degree and cross-setting consistency of multicultural socialization experiences. In this model (see Fig. 1), three niches (Niches 4, 5, 6) demonstrated relatively similar patterns across school-peer-family (i.e., mean level differences across settings within each niche were not greater than 0.51), indicating cross-setting similarity in multicultural socialization experiences within each niche but these three niches ranged in the degree or level to which youth were afforded socialization opportunities (i.e., lower to higher). Roughly 25% of adolescents were in the cross-setting similar lower socialization niche (Niche 4; n = 177), which was characterized by lower levels (i.e., mean levels between 1.98 and 2.45) of school-peer-family multicultural socialization. Approximately 7% of adolescents were in the cross-setting similar moderate socialization niche (Niche 5; n = 52), which was defined by moderate levels (i.e., mean levels between 2.44 and 2.87) of school-peer-family multicultural socialization. Only 4% of adolescents were in the cross-setting similar higher socialization niche (Niche 6; n = 29), which was characterized by the highest levels (i.e., mean levels between 3.26 and 3.77) of school-peer-family multicultural socialization. This profile represents a meaningful group that also emerged in the other profile solutions (e.g., Appendix A).
The remaining three niches (Niches 1, 2, 3) demonstrated relatively dissimilar patterns across school-peer-family (i.e., mean level differences across at least two contrasting settings within each profile were greater than 0.69) indicating cross-setting dissimilarity, which ranged in the type of contrast in multicultural socialization experiences across each of the settings (e.g., greater mean level differences were found in the school setting vs. the other settings) within niches and the degree or level to which youth were afforded socialization opportunities (i.e., lower to moderate). A large proportion (41%) of adolescents were in the cross-setting dissimilar peer contrast socialization niche (Niche 1; n = 286), which was characterized by greater dissimilarities in multicultural socialization experiences between peer (M = 1.15) and the other settings (M School = 2.02; M Family = 1.84) ranging from very low to low levels of multicultural socialization across settings. Roughly 6% of adolescents were in the cross-setting dissimilar greater peer contrast socialization niche (Niche 2; n = 41), which was characterized by the highest dissimilarity in multicultural socialization experiences between peer (M = 0.31)
Aim 2: Associations Between Multicultural Socialization Niches and Academic Functioning
The second research aim examined how the multicultural socialization niches identified in Aim 1 were related to markers of academic functioning (Table 3). Supporting Hypothesis 2a, mean comparisons for the six niches revealed that emotional academic engagement was significantly higher for adolescents negotiating the cross-setting similar higher socialization niche (Niche 6) compared to adolescents negotiating the other five niches (Niches 1 through 5). Similarly, behavioral academic engagement was also greater for adolescents in this niche compared to adolescents in all other niches, except those in the cross-setting dissimilar greater peer contrast niche (Niche 2). Further, emotional academic engagement was significantly higher for adolescents negotiating the cross-setting similar moderate socialization niche (Niche 5) compared to adolescents negotiating the cross-setting dissimilar peer contrast (Niche 1) and school contrast (Niche 3) multicultural socialization niches.
Partially supporting Hypothesis 2b, mean comparisons revealed that adolescents negotiating the cross-setting dissimilar school contrast socialization niche (Niche 3), which was also characterized by the lowest levels of multicultural socialization in each of the settings, demonstrated lower emotional academic engagement compared to adolescents negotiating the cross-setting similar higher (Niche 6) and moderate (Niche 5) socialization niches. These adolescents also showed lower behavioral academic engagement compared to adolescents negotiating the cross-setting similar higher (Niche 6) and dissimilar greater peer contrast (Niche 2) socialization niches, and lower academic expectations compared to adolescents in the cross-setting similar higher (Niche 6) and lower (Niche 4) socialization niches. Similarly, behavioral academic engagement was also lower for adolescents negotiating the cross-setting dissimilar peer contrast socialization niche (Niche 1), which was characterized by the second lowest levels of multicultural socialization in each of the settings, compared to adolescents in the cross-setting dissimilar greater peer contrast niche (Niche 2). No significant differences in academic aspirations were found across niches.
Aim 3 Exploratory Analyses: Social Position Predictors of Multicultural Socialization Niches
Exploratory analyses examined the associations between salient social position indicators (i.e., gender, ethnicity/race, and parental nativity) and the identified multicultural socialization niches (Table 4; see Appendix B for a description of the niches by social position indicators) while accounting for school site. (Table 4 note: boldface marks significant estimates, p < 0.05; estimates reflect the effects of the predictors on the likelihood of membership in the first versus the second listed niche, with boys, adolescents with both parents born in the U.S., White adolescents, and School 4 serving as the reference codings.) Findings from multinomial logistic regression analyses indicated that social position was a significant predictor of profile membership above and beyond any school site effects. In terms of gender, girls had lower odds or were less likely than boys to be classified in the cross-setting dissimilar peer contrast (Niche 1) and school contrast (Niche 3) socialization niches and in the cross-setting similar lower socialization niche (Niche 4) compared to the cross-setting similar moderate socialization niche (Niche 5). Similarly, girls had higher odds than boys of being in the cross-setting similar moderate niche (Niche 5) compared to the cross-setting similar higher socialization niche (Niche 6). Taken together, these findings suggest that girls were more likely to negotiate the cross-setting similar moderate socialization niche (Niche 5) compared to most niches (i.e., Niches 1, 3, 4, 6). Turning to ethnicity/race, compared to White adolescents, Latinx and Multiethnic adolescents had higher odds of being in the cross-setting dissimilar peer contrast socialization niche (Niche 1) and in the cross-setting similar lower (Niche 4) and moderate (Niche 5) socialization niches compared to the cross-setting similar higher socialization niche (Niche 6). They were also less likely to be in the cross-setting dissimilar school contrast socialization niche (Niche 3) compared to the cross-setting similar moderate socialization niche (Niche 5). Compared to White youth, Latinx adolescents were more likely to be in the cross-setting dissimilar greater peer contrast niche (Niche 2) than the cross-setting similar higher socialization niche (Niche 6). There were no differences in the likelihood of profile membership between White and Black adolescents. Together, these findings indicate that ethnicity/race informs the multicultural socialization niches that adolescents negotiate on a regular basis, particularly for Latinx and Multiethnic youth when compared to White youth.
In terms of parental nativity, adolescents with at least one immigrant parent (i.e., foreign-born) had lower odds or were less likely than adolescents with U.S.-born parents to be classified in the cross-setting dissimilar peer contrast (Niche 1) and school contrast (Niche 3) socialization niches compared to the cross-setting similar moderate socialization niche (Niche 5) and less likely to be in the cross-setting dissimilar peer contrast socialization niche (Niche 1) compared to the cross-setting similar higher socialization niche (Niche 6). Taken together, these findings suggest that adolescents with immigrant parents were less likely to negotiate cross-setting dissimilar niches (i.e., Niches 1, 3) compared to cross-setting similar niches (i.e., Niches 5, 6).
School site was treated as a control variable. Overall, there were no differences in the likelihood of profile membership between adolescents in School 4 (largest sample size) and those in the other three schools. However, compared to School 4, adolescents attending School 2 (smallest sample size) had lower odds or were less likely to be negotiating the cross-setting dissimilar peer contrast (Niche 1) and cross-setting similar lower (Niche 4) socialization niches compared to the cross-setting similar higher socialization niche (Niche 6).
Discussion
Cultural diversity characterizes many parts of the world (Pew Research Center, 2019, April 22) and has important implications for the development and adjustment of youth from all ethno-racial groups (Berry et al., 2022), particularly for their academic adjustment as youth increasingly attend ethno-racially diverse schools (Nishina et al., 2019). Nevertheless, there is scarce research on how socialization processes equip youth to respond to increasing multicultural demands and the degree to which these socialization experiences inform youth academic functioning. This study addressed this gap by examining multicultural socialization niches across key proximal settings (i.e., schools, peers, and families) and their links with youth academic functioning. Consistent with ecological models highlighting unique contexts of development (e.g., Bronfenbrenner & Morris, 2006), this study identified a range of multicultural socialization niches that adolescents regularly negotiate: cross-setting similar higher, moderate, and lower socialization niches and cross-setting dissimilar peer contrast, greater peer contrast, and school contrast socialization niches. Most adolescents were negotiating niches in which they were afforded lower and/or dissimilar multicultural socialization opportunities (Aim 1). Further, findings suggest that contextual diversity matters, as both the degree and consistency characterizing youth multicultural socialization niches had implications for their academic functioning; in particular, more cohesive niches seemed the most beneficial (Aim 2). In line with theoretical notions underscoring the role that social stratification mechanisms play in youth development (e.g., García Coll et al., 1996), results from exploratory analyses suggest that indicators of youth social position may also shape the multicultural socialization niches that adolescents navigate (Aim 3).
School-Peer-Family Multicultural Socialization Niches (Aim 1)
Building on ecological models that consider the unique, intersecting nature of the developmental contexts youth regularly negotiate (e.g., Bronfenbrenner & Morris, 2006), this study examined variability in adolescent multicultural socialization niches relative to the degree and consistency of multicultural socialization experiences afforded to them across school, peer, and family settings. This approach recognizes the importance of these settings across adolescence (Eccles & Roeser, 2011) and their substantial interactions (Bronfenbrenner & Morris, 2006). Further, it acknowledges that socialization niches emerge from adaptive cultural models reflecting societal and individual values (White et al., 2018) and are influenced by the beliefs and practices that characterize a given setting (Super & Harkness, 2002). Six distinct socialization niches were identified using a person-centered approach (Bergman, 2001), supporting Hypothesis 1. Consistent with prior work focused on related types of cultural socialization experiences (i.e., heritage and national cultural socialization; combination of cultural socialization experiences) across school and family settings (Byrd & Ahn, 2020) and across peer and family settings (Wang & Benner, 2016), findings from the current study highlight that U.S. adolescents from multiple ethno-racial backgrounds are negotiating a diverse range of multicultural socialization niches that vary in the degree and consistency/similarity in socialization experiences across school, peer, and family settings. Three niches demonstrated cross-setting similarity and ranged in the degree or level youth were afforded socialization opportunities. Greater levels of school-peer-family multicultural socialization characterized the cross-setting similar higher socialization niche compared to the cross-setting similar moderate and lower socialization niches, characterized by moderate and lower levels of multicultural socialization, respectively. There was considerable variability in the number of adolescents negotiating each of these niches. Specifically, the cross-setting similar lower socialization niche represented a quarter of adolescents. In contrast, the cross-setting similar moderate and higher socialization niches included only seven and four percent of adolescents, respectively.
Across these cross-setting similar niches, adolescents likely encounter comparable cross-setting messages and opportunities to learn about and treat with respect members of multiple cultures. Further, adolescents may be able to draw additional benefits from socialization experiences taking place in cohesive niches, particularly when given ample socialization opportunities, because the mutually reinforcing repetition of similar influences occurring across school-peer-family settings can better support youth in internalizing these messages and developing multicultural competencies (Super & Harkness, 2002). Albeit small, the cross-setting similar higher and moderate socialization niches represent important niches. Indeed, prior work has documented comparable niches and proportions. For instance, work on related types of cultural socialization (i.e., heritage and national cultural socialization; combination of cultural socialization experiences) has documented the significance of frequent, cross-setting similar cultural socialization experiences across school and family settings (Byrd & Ahn, 2020) and across peer and family settings (Wang & Benner, 2016). Consistent with the current study, this work also found the cross-setting similar higher niches to represent small proportions of their samples (Byrd & Ahn, 2020), perhaps because historical assimilationist practices in U.S. schooling and other settings make it unlikely to observe high and similar levels of multicultural socialization across settings (Urrieta & Machado-Casas, 2013).

Three niches demonstrated cross-setting dissimilarity, which ranged in the type of cross-setting contrast and the degree to which youth were afforded socialization opportunities. The cross-setting dissimilar school contrast socialization niche was characterized by greater dissimilarities between the multicultural socialization experiences afforded to youth in the school setting compared to the peer and family settings and demonstrated the lowest levels of cross-setting multicultural socialization of all niches. The other two niches, the cross-setting dissimilar peer contrast and greater peer contrast socialization niches, were characterized by larger dissimilarities between the multicultural socialization experiences provided to youth in the peer setting compared to the school and family settings.
In the former, however, the contrast was lower, and cross-setting multicultural socialization experiences ranged from very low to low. In the latter, the contrast was higher and cross-setting multicultural socialization experiences ranged from very low to moderate levels. The cross-setting dissimilar peer contrast niche was the largest niche, representing 41 percent of adolescents, whereas the cross-setting dissimilar greater peer contrast and school contrast niches included six and 17 percent of adolescents, respectively.
In these cross-setting dissimilar niches, adolescents are likely exposed to competing messages across educators, peers, and caregivers regarding the importance of cultural pluralism and equal treatment of members of all ethno-racial groups. Conflicting messages may diminish adolescents' ability to develop multicultural competencies (Ward & Szabó, 2023). Further, this lack of cohesiveness across salient developmental settings may prove affectively, behaviorally, and cognitively taxing (Safa et al., 2019) and thus may reduce the benefits adolescents can draw from these socialization experiences. Indeed, prior work on related types of cultural socialization (i.e., heritage and national cultural socialization; combination of cultural socialization experiences) has documented developmental costs to adolescent psychosocial adjustment of dissimilar cultural socialization experiences across school and family settings (Byrd & Ahn, 2020) and across peer and family settings (Wang & Benner, 2016).
It is not surprising that most adolescents in the current sample are negotiating cross-setting dissimilar or cross-setting similar lower multicultural socialization niches. Indeed, creating a harmonious, culturally plural society where people from all ethno-racial groups are valued and treated equally is a desirable (Deaux & Verkuyten, 2014) but complex goal (Berry et al., 2022). Further, many current U.S. state policies (e.g., HB 3979 in Texas, SB 1070 in Arizona) are inconsistent with multiculturalism values and the effects of these policies trickle down to the proximal settings youth navigate, including schools and families (Santos et al., 2018). Finally, socialization agents such as teachers (Chahar Mahali & Sevigny, 2022) and parents (Anderson & Stevenson, 2019) often report not being equipped to provide multicultural socialization opportunities to youth in rapidly changing settings within diverse communities. Thus, the proportion of adolescents negotiating the identified niches may exemplify constraints faced by schools, peers, and families in aligning values and goals related to multiculturalism. These constraints are likely imposed by historical systems and derivatives of social stratification including racism, discrimination, and segregation (García Coll et al., 1996).
Multicultural Socialization Niches and Youth Academic Functioning (Aim 2)
In line with ecological models underscoring the intersecting influence of proximal contexts on youth adjustment (e.g., Bronfenbrenner & Morris, 2006), this study examined the role of youth multicultural socialization niches on their academic functioning. Supporting Hypothesis 2a, youth negotiating niches characterized by cross-setting similarity with relatively higher levels or degree of multicultural socialization demonstrated better academic functioning than youth in other niches. Specifically, adolescents negotiating the cross-setting similar higher socialization niche had greater emotional academic engagement than adolescents in the other five niches. These adolescents also demonstrated higher behavioral academic engagement than adolescents in all other niches except those in the cross-setting dissimilar greater peer contrast niche. In addition, adolescents negotiating the cross-setting similar moderate socialization niche also demonstrated higher emotional academic engagement than adolescents negotiating the cross-setting dissimilar peer contrast and school contrast socialization niches. These findings highlight the importance of cross-setting similarity and moderate-to-higher levels of multicultural socialization for adolescent academic engagement. These results are consistent with theoretical notions underscoring the adjustment-related benefits of cohesiveness (Bronfenbrenner & Morris, 2006) and mutually reinforcing repetition (Super & Harkness, 2002) across adolescent proximal contexts of development.
It is likely that adolescents negotiating cohesive niches with at least moderate levels of multicultural socialization are afforded relatively consistent and frequent opportunities across schools, peers, and families. These opportunities can help youth to understand the benefits and challenges of cultural pluralism (Berry et al., 2022), to negotiate everyday interactions with people from culturally diverse backgrounds (Neblett et al., 2012), to develop a sense of belonging in culturally plural settings (Nishina et al., 2019), and to gain other multicultural competencies needed to navigate culturally diverse academic settings and demands (García Coll & Szalacha, 2004). Thus, the socialization opportunities afforded to youth in these niches with moderate and high levels of multicultural socialization can foster the development of self-concept and skills to successfully navigate increasingly diverse educational settings and demands (Saleem & Byrd, 2021), which can bolster their behavioral and academic engagement in schools.
The current study extends prior work documenting the importance of adolescent cross-setting similar higher cultural socialization (e.g., heritage, national, or a combination) niches for their academic adjustment (peer and family settings; Wang & Benner, 2016) and academic engagement and aspirations (Byrd & Ahn, 2020) by focusing on the interactive influence of three key proximal contexts during adolescence (i.e., schools, peers, and families) on a less studied but increasingly salient socialization process, namely multicultural socialization. Nevertheless, findings should be interpreted cautiously as they present limited evidence of the benefits of cross-setting similar higher and moderate multicultural socialization niches in a small proportion of the sample. In addition, the fact that there were no differences in behavioral academic engagement between youth in the cross-setting similar higher and cross-setting dissimilar greater peer contrast niches (the latter being the cross-setting dissimilar niche with the highest levels of family multicultural socialization) may suggest that multicultural socialization opportunities taking place in the family setting are particularly promotive of youth academic engagement. More work is needed to understand the benefits of cross-setting consistency and the optimal degree of multicultural socialization experiences within specific settings for youth academic functioning.
Hypothesis 2b was partially supported as adolescents negotiating the cross-setting dissimilar school contrast socialization niche, which was the niche characterized by cross-setting dissimilarity and lowest levels of multicultural socialization, demonstrated lower academic functioning than adolescents in some of the other niches. Importantly, most differences emerged between this niche and the cross-setting similar socialization niches. Specifically, youth in the cross-setting dissimilar school contrast socialization niche demonstrated lower emotional (vs. cross-setting similar higher and moderate socialization niches) and behavioral (vs. cross-setting similar higher socialization niche) academic engagement and lower academic expectations (vs. cross-setting similar higher and lower socialization niches). Comparisons with the other cross-setting dissimilar niches revealed that adolescents in this niche showed lower behavioral academic engagement than adolescents negotiating the cross-setting dissimilar greater peer contrast socialization niche. Relatedly, adolescents negotiating the cross-setting dissimilar peer contrast socialization niche, characterized by the second lowest levels of cross-setting socialization, also demonstrated lower behavioral academic engagement than adolescents in the cross-setting dissimilar greater peer contrast niche. Consistent with theoretical notions (Bronfenbrenner & Morris, 2006), these findings exemplify the cost of lack of cohesiveness across youth developmental contexts and of fragmented multicultural socialization opportunities for adolescent academic engagement and expectations. Of note, prior work on related types of cultural socialization (i.e., heritage and national cultural socialization; combination of cultural socialization experiences) did not find any differences in academic functioning between adolescents in the cross-setting dissimilar niches and those in the lower-level cross-setting similar niches (e.g., Byrd & Ahn, 2020). However, findings from the current study suggest that inconsistency combined with lower levels of multicultural socialization is most detrimental to youth academic engagement and expectations. Future work should continue to examine the developmental implications of contrasting socialization experiences across settings involving different degrees of socialization efforts.
It is likely that adolescents negotiating dissimilar niches with lower levels of multicultural socialization are afforded scarce and/or conflicting opportunities across schools, peers, and families to learn about the importance of cultural pluralism and equal treatment for members of all ethno-racial groups. Infrequent and inconsistent opportunities may result in limited opportunities for youth to develop behavioral, cognitive, and social skills to navigate ethno-racially diverse settings and a lack of efficacy in responding to multicultural demands (Wang & Benner, 2016). Further, these adolescents may engage in substantial efforts to reconcile inconsistent messages and to alter their behaviors to meet the demands of specific settings which could prove behaviorally, cognitively, and socially taxing (Safa et al., 2019), and this, in turn, may reduce their academic functioning (Safa et al., 2022).
Notably, contextual diversity of the multicultural socialization niches captured in this study informed youth emotional and behavioral academic engagement. However, the range of diversity in consistency and degree of cross-setting multicultural socialization opportunities minimally informed their academic expectations and did not inform their academic aspirations at all. It is likely that youth social position including socioeconomic status and parental education constrained the benefits of multicultural socialization opportunities for youth academic aspirations and expectations. Indeed, prior work has documented that indicators of social position such as parent educational attainment have important implications for youth's educational aspirations and expectations because they provide youth with funds of knowledge and opportunities to aspire and pursue their academic goals (e.g., Lui et al., 2014). Alternatively, the development of self-concept and skills that adolescents gain from multicultural socialization opportunities may not directly inform their educational aspirations and expectations while academic socialization across multiple settings, or efforts to prepare youth to attend and thrive in educational settings, often emerges as an important resource in raising youth's educational aspirations and expectations (Chun & Devall, 2019). Future work should continue to examine how different types of socialization including multicultural and academic socialization inform adolescent beliefs in their ability to realize their educational aspirations and, eventually, reach their educational goals.
In sum, schools, peers, and families are salient proximal developmental contexts comprising adolescent multicultural socialization niches. These contexts work in tandem with one another, and their joint forces may promote or inhibit multicultural socialization goals and associated academic-related benefits. Adolescents negotiating more cohesive niches with higher degrees of multicultural socialization seem to reap the most benefits; specifically, they demonstrated higher behavioral and emotional academic engagement. Conversely, there was partial evidence that adolescents negotiating dissimilar niches with lower degrees of multicultural socialization seem to reap the least benefits for their academic functioning. Overall, findings underscore the importance of both consistency and degree of multicultural socialization experiences and suggest that these benefits do not extend to all indicators of academic functioning.
Social Position Indicators of Multicultural Socialization Niches (Exploratory, Aim 3)
Based upon extant theory recognizing the influence of salient social position indicators (e.g., gender, ethnicity/race, and parental nativity; García Coll et al., 1996) on youth socialization processes, exploratory analyses examined whether these key indicators of social position shape the multicultural socialization niches youth were negotiating while accounting for school site. Findings indicated social position informed the degree and consistency of multicultural socialization opportunities that youth experience across these settings. Of note, school site was not a significant predictor with two exceptions: compared to School 4, adolescents attending School 2 were less likely to negotiate cross-setting dissimilar peer contrast and cross-setting similar lower socialization niches than the cross-setting similar higher socialization niche. Regarding gender, girls were more likely than boys to negotiate the cross-setting similar moderate socialization niche compared to most niches. This finding may suggest that girls are more likely than boys to receive consistent and frequent multicultural socialization messages across schools, peers, and families. This finding stands in contrast with prior work that has documented that gender does not inform the types (e.g., degree and consistency) of cultural socialization niches that adolescents negotiate (heritage, national, combination of cultural socialization; Byrd & Ahn, 2020; Wang & Benner, 2016). The current study's findings, which focus on multicultural socialization niches across three settings, are consistent with other work suggesting that girls are more likely than boys to seek cultural socialization experiences (Huynh & Fuligni, 2008) and indicating that girls are often considered the carriers of culture (Umaña-Taylor et al., 2009). Thus, girls may seek out multicultural socialization experiences across settings because they have a greater awareness of cultural influences and diversity.
Findings indicated that ethnicity/race also informed the multicultural socialization niches that adolescents negotiate on a regular basis. Specifically, Latinx and Multiethnic youth showed the same pattern of niche membership relative to White youth. However, it is important to note that most Multiethnic youth (67%) identified Hispanic/Latinx as one of their ethnic-racial identities. Further, the Latinx population is the second largest ethnic-racial group (White is the largest group) in the Southwest city where the study took place. Therefore, the sample composition combined with the establishment of the Latinx population in this region of the country may explain the similarities found across Latinx and Multiethnic youth compared to White youth. Further, prior research has documented that Latinx parents highly endorse socialization values of diversity, such as talking to their children about cultural differences (Ayon, 2018); these values likely inform the multicultural socialization opportunities afforded to youth within the family setting, which intersects with other settings.
No differences were found across niches for Black youth compared to White youth. These findings are consistent with prior work on heritage and national cultural socialization niches (Wang & Benner, 2016). However, they differ from prior research documenting that Black adolescents were less likely than White adolescents to negotiate a cross-setting dissimilar socialization niche involving a combination of cultural socialization opportunities and experiences of discrimination (Byrd & Ahn, 2020). Given that this prior study focused on socialization and discrimination, their findings are in line with theoretical work documenting the pervasive impact of exposure to discrimination and colorism for Black youth (e.g., García Coll et al., 1996) and research highlighting the importance of ethnic-racial socialization that involves coping with discrimination (Anderson & Stevenson, 2019). Future work should continue to examine the role of ethnicity/race in different types of cultural socialization.
Regarding parental nativity, findings indicated that adolescents with at least one immigrant parent were less likely than youth with no immigrant parents to negotiate cross-setting dissimilar niches than cross-setting similar niches. Thus, these findings suggest that youth with immigrant parents may be more likely to develop in niches in which schools, peers, and families have more alignment in values and goals related to multiculturalism. Adolescents developing in families with immigrant parents, who have a closer generational connection to their heritage culture due to their family's relatively more recent immigration, experience different affordances and demands (e.g., serving as language brokers or providing host country culture socialization) that inform their multicultural socialization experiences within the family setting (Safa et al., 2022) and beyond (Safa et al., 2019). These adolescents may seek out multicultural socialization experiences across settings because they have greater awareness of cultural affordances and demands. Prior work, however, has documented that parental nativity informed the types of national culture socialization niches that youth navigate but did not inform their heritage culture socialization niches. Specifically, adolescents with at least one immigrant parent were more likely to be in the cross-setting dissimilar national cultural socialization niche than those without immigrant parents (Wang & Benner, 2016). Findings from prior work and the current study suggest that parental nativity may differentially inform multiple cultural socialization opportunities. More work is needed in this area.
In sum, indicators of social position shape the multicultural socialization niches that adolescents navigate. Findings indicated that girls and youth with immigrant parents were more likely to negotiate more cohesive niches with relatively higher degrees of multicultural socialization opportunities than their counterparts. Further, Latinx and Multiethnic adolescents showed the same pattern of niche membership relative to White youth, whereas no differences were observed between Black and White youth. Taken together, these findings exemplify the role of the affordances and demands youth experience based on their social position and underscore the importance of examining how intersectional identities may relate to multicultural socialization niches (Priest et al., 2014).
Developmental and Applied Implications
The study findings have important theoretical, translational, and practical implications. Building on ecological models (e.g., García Coll et al., 1996), the current study highlights the transactional nature of youth development and adjustment by providing evidence that indicators of social position can shape youth's context of development and that contextual diversity in multicultural socialization experiences can inform adolescent academic functioning. Furthermore, this study provides evidence of youth's challenges in culturally diverse societies where multiculturalism values have not been widely adopted (Berry et al., 2022). Indeed, most adolescents in the current sample were negotiating niches in which they were afforded inconsistent and/or lower multicultural socialization opportunities suggesting that most adolescents have not received enough opportunities to develop multicultural competencies across three of their main proximal contexts of development.
Albeit in a small proportion of the sample, the importance of consistency and at least moderate degrees of multicultural socialization was also evident. These findings point to potential intervention targets to enhance youth's academic functioning in ethno-racially diverse societies. Recently, promotion efforts to provide youth with opportunities to learn about their own and others' ethnic-racial groups and cultural heritages have emerged (e.g., Dziedziewicz et al., 2014; Stein et al., 2021; Umaña-Taylor et al., 2018), but these efforts are often focused on increasing socialization opportunities in one particular context like school (e.g., Umaña-Taylor et al., 2018) or family (e.g., Stein et al., 2021). The study's results suggest that such programs may be most effective when multiple socializing contexts are involved. Indeed, the cross-setting dissimilar school contrast and peer contrast socialization niches were associated with lower emotional or behavioral academic engagement. This suggests that youth might need additional support to make sense of the fragmented and inconsistent messages they receive from their schools, peers, and families. Thus, given the need for successful navigation of increasingly ethno-racially diverse school contexts (Nishina et al., 2019), these findings point to the meaningful role of engaging with multiple socialization contexts to promote youth academic functioning, particularly emotional and behavioral academic engagement. Cohesive niches with frequent multicultural socialization experiences might promote youth's multicultural competencies, including engagement with ethno-racially diverse peers in academic settings, with benefits for their academic functioning (Schachner et al., 2016; Schachner et al., 2021). Future research on promoting multicultural competence should consider the relative degree and consistency of youth multicultural socialization experiences between school and other key proximal contexts (Barrett, 2018; Dee & Penner, 2017).
Limitations and Future Directions
The use of cross-setting, person-centered analyses in a relatively large racially and ethnically diverse sample of early and middle adolescents across four schools is a strength of the study. Despite this strength, several limitations should be noted. This is a cross-sectional study, and thus changes in multicultural socialization niches could not be examined. Future work should rely on longitudinal designs and investigate cross-setting changes in the degree and consistency of multicultural socialization and how changes (or maintenance) throughout adolescence may prospectively inform academic functioning. Additionally, the bidirectional relation between cross-setting multicultural socialization and youth's socialization-seeking efforts was not captured within the current study. Consistent with prior work highlighting the role of youth agency in cultural socialization processes (Umaña-Taylor et al., 2013), it would be important for future work to examine how youth's efforts to learn about ethnic/racial and cultural heritages other than their own shape youth's socialization niches and their academic-related benefits.
Furthermore, for a more comprehensive understanding of cross-setting multicultural socialization niches and their academic-related associations, future research should rely on multi-reporter assessments (e.g., parent, youth), as well as multi-method approaches such as surveys and observations of multicultural socialization in families, peers, teachers, and schools and the use of school records as additional indicators of academic functioning. Relatedly, the school multicultural socialization scale used in this study assessed the degree to which youth agreed that statements about cultural pluralism were true, whereas the friends/peers and parent/caregiver scales assessed how frequently opportunities to learn about cultural pluralism and equal treatment for members of all groups were available to them. It is possible that the scale and content of the items limited the variability that emerged in school multicultural socialization levels across the identified niches. Thus, future work should measure multicultural socialization efforts relative to cultural pluralism and equal treatment across settings.
Given the nature of the study's sample, the present findings may not generalize to youth from other ethnic-racial groups and youth attending schools with differing ethnic-racial compositions across various U.S. regions. Although this study tested whether the youth's ethnic-racial background (i.e., White, Black, Latinx, Multiethnic) predicted the likelihood of being in a specific multicultural socialization niche, youth who identified as Asian American or Pacific Islander, American Indian or Alaska Native, or Arab, Middle Eastern, or North African (n = 27) were omitted from this analysis due to the small sample size. This study did not examine whether the identified niches were comparable (i.e., invariant) across ethnicity/race because of insufficient sample size in each ethnic-racial group to conduct such analyses (Morin et al., 2015). Future work should recruit larger multigroup samples to understand better the links between ethnic-racial backgrounds and multicultural socialization niches. Relatedly, this study tested the role of key social position indicators (i.e., ethnicity-race, gender, parent nativity) on profile membership but due to the analytical approach used, this study could not test the role of these factors on the association between multicultural socialization niches and indicators of academic functioning. This is an important future direction.
While the identification of the niches was justified by the data and surfaced conceptually meaningful groupings, some of these niches or profile sizes were comparatively small, particularly the cross-setting dissimilar greater peer contrast socialization niche and the cross-setting similar higher socialization niche. Identifying these groups is meaningful and important, but caution is warranted in interpreting group comparisons involving these niches. Specifically, the small size of the groups may lead to comparisons with lower statistical power.
It is also possible that the study's data collection period (December–January) had some influence on opportunities to discuss and learn about different ethnic-racial and cultural heritages, given that multiple holidays are celebrated across cultures during that time; data collection at several periods during the year might yield insights into the temporal dynamics of multicultural socialization across settings. Finally, possible mediating associations were not tested. For instance, given its aim to teach intercultural competence and understanding, multicultural socialization might promote critical thinking skills closely linked to academic functioning (Tadmor et al., 2009). Future research might formally test this possibility.
Conclusion
Cultural diversity has shaped many parts of the world and has important implications for the development and adjustment of youth from all ethnic and racial groups, particularly for their academic adjustment as youth increasingly attend ethno-racially diverse schools. Nonetheless, there is scarce research on how socialization processes prepare youth to respond to increasing multicultural demands and the degree to which these socialization opportunities inform youth academic functioning. This study addressed this gap by examining multicultural socialization niches across key proximal settings (i.e., schools, peers, and families) and their links with youth academic functioning. Findings from the current study highlight that U.S. adolescents from multiple ethno-racial backgrounds are negotiating a diverse range of multicultural socialization niches that vary in the degree and consistency in socialization experiences across school, peer, and family settings: cross-setting similar higher, moderate, and lower socialization niches and cross-setting dissimilar peer contrast, greater peer contrast, and school contrast socialization niches. Further, the settings comprising these niches work in tandem with one another, and their joint forces inform multicultural socialization goals and associated academic-related benefits. Particularly, adolescents negotiating more cohesive niches with higher degrees of multicultural socialization demonstrated higher behavioral and emotional academic engagement. Conversely, there was partial evidence that adolescents negotiating dissimilar niches with lower degrees of multicultural socialization demonstrated lower academic functioning. In addition, findings from exploratory analyses indicated that social position could shape the multicultural socialization opportunities that youth experience across these settings. Girls and youth with at least one immigrant parent were more likely to negotiate cohesive niches with higher degrees of multicultural socialization compared to their counterparts. Further, Latinx and Multiethnic youth showed the same pattern of niche membership relative to White youth, whereas no differences were observed between Black and White youth. Study findings highlight the transactional nature of youth development and adjustment by providing evidence that social position informs youth's context of development and that contextual diversity in multicultural socialization experiences informs their academic functioning. Importantly, promoting multicultural socialization across school, peer, and family settings is promising for improving youth's academic functioning.
Author contributions M.M.H. conceived of the study, performed the statistical analyses and interpretation of the data, led the writing of the manuscript; M.D.S. led the conceptualization and writing of the introduction and discussion, contributed to the writing of the manuscript and interpretation of findings; O.K. contributed to the conceptualization of the study and interpretation of findings, contributed to the writing of the manuscript; A.A.R. assisted in the conceptualization of the study, contributed to the interpretation of findings and writing of the manuscript; T.H. oversaw implementation and administration of the larger study from which the data are drawn, contributed to the interpretation of findings, conceptualization of the study, and the writing of the manuscript. M.M.H. and M.D.S. are equally-contributing first authors. All authors reviewed and approved the final manuscript.
Data Sharing and Declaration
The data used in this manuscript will not be deposited.
Compliance with ethical standards
Conflict of interest The authors declare no competing interests.
Ethical approval All study recruitment and measurement procedures were approved by the school district and the Arizona State University's Institutional Review Board.
Informed consent Active parental consent and youth assent was obtained from all participants.
Publisher's note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons. org/licenses/by/4.0/.
Notch Signaling Pathway Is Activated in Motoneurons of Spinal Muscular Atrophy
Spinal muscular atrophy (SMA) is a neurodegenerative disease produced by low levels of Survival Motor Neuron (SMN) protein that affects alpha motoneurons in the spinal cord. Notch signaling is a cell-cell communication system well known as a master regulator of neural development, but also with important roles in the adult central nervous system. Aberrant Notch function is associated with several developmental neurological disorders; however, the potential implication of the Notch pathway in SMA pathogenesis has not been studied yet. We report here that SMN deficiency, induced in the astroglioma cell line U87MG after lentiviral transduction with a shSMN construct, was associated with an increase in the expression of the main components of Notch signaling pathway, namely its ligands, Jagged1 and Delta1, the Notch receptor and its active intracellular form (NICD). In the SMNΔ7 mouse model of SMA we also found increased astrocyte processes positive for Jagged1 and Delta1 in intimate contact with lumbar spinal cord motoneurons. In these motoneurons an increased Notch signaling was found, as denoted by increased NICD levels and reduced expression of the proneural gene neurogenin 3, whose transcription is negatively regulated by Notch. Together, these findings may be relevant to understand some pathologic attributes of SMA motoneurons.
Introduction
Spinal muscular atrophy (SMA) is a neurodegenerative disease inherited in an autosomal recessive manner that affects alpha motoneurons in the spinal cord, and causes muscular atrophy of proximal limb and trunk muscles, paralysis, and in the most severe cases, death [1,2]. SMA is caused by the homozygous deletion or specific mutations of the Survival Motor Neuron 1 (SMN1) gene, which results in reduced dosage of full-length SMN protein [3]. The deletion of SMN1 homologs in other animals is lethal at early embryonic ages [4]; however, the human genome contains a variable number of copies of the SMN2 gene that produces about 90% of a highly unstable, truncated protein, called SMNΔ7 due to defective mRNA maturation, and only 10% of normal (full-length) protein [5]. Thus, the severity of the disease depends on the number of copies of SMN2 gene [6].
Notch signaling is a cell-cell communication system well known as a master regulator of neural development [7][8][9]. Four Notch receptors (Notch1-4) and five ligands (Jagged1 and 2; Delta-like1, 3 and 4) have been identified in mammals [10]. Upon ligand binding, a series of cleavage events culminate in the proteolytic cleavage of the transmembrane Notch receptor by the γ-secretase, giving rise to the Notch intracellular domain (NICD), which is translocated into the cell nucleus. Canonical Notch signaling involves the binding of NICD to DNA-binding cofactors and the subsequent activation of the transcription of target genes [11,12]. The most studied Notch targets are the Hairy and Enhancer of Split (Hes) genes. Hes1 negatively controls the expression of a series of proneural genes, including Neurogenin 3 (Ngn3), involved in neuritogenesis [13][14][15]. During the development of the central nervous system in vertebrates, the nascent neurons, by expressing Delta1, deliver lateral inhibition to the Notch1-expressing progenitors in contact with them, so as to prevent these progenitors from differentiating prematurely into neurons and from expressing Delta1 [16]. In addition, Notch expression persists throughout the adult brain in differentiated cells [17,18]. Aberrant Notch function is associated with several developmental neurological disorders and neurodegenerative diseases (reviewed by [19]), and increased expression of Notch has been described in Alzheimer's disease, Pick's disease and Down syndrome [20,21]; however, the potential implication of Notch pathway in SMA pathogenesis has not been studied yet.
Several findings suggest a potential implication of the Notch system in SMA pathology: Notch1 (hereafter referred to as Notch) as well as SMN share common functions in the regulation of neurite outgrowth [13,[22][23][24][25], cell migration [26,27], axon guidance [28,29] and neuromuscular junction maturation [30,31]. In addition, Notch signaling functions in astrocytes as an inducer of characteristic elements for reactive gliosis [32,33], and reactive gliosis has also been described in different types of human SMA [34][35][36] and in the ventral spinal cord of SMA mice [37]. In this sense, we hypothesized that SMN depletion could be related to an increased activation of the Notch signaling pathway in astrocytes. Thus, we first studied in vitro, in the U87MG astroglioma cell line experimentally depleted of SMN, the immunoexpression of Notch, its active intracellular domain (NICD) and its ligands (Jagged1 and Delta1). Then, in an in vivo model of SMA, the SMNΔ7 mouse model, we also studied the expression of Notch ligands in reactive astrocytes in relation with the potential activation of the Notch signaling in the neighboring spinal cord motoneurons.
Increased Notch Signaling in U87MG Astroglioma Cells Depleted of SMN
Four days after lentiviral transduction of U87MG astroglioma cells with an shRNA sequence targeting SMN nearly a 60% reduction in SMN expression levels was found by western blotting, as compared to those transduced with shRNA EV (Figure 1A,B) and as previously described [27]. Then, the expression levels of four participants in the Notch signaling pathway, namely its ligands Jagged1 and Delta1, the Notch receptor and its active intracellular form (NICD) were studied by western blotting. The expression of these proteins was found significantly increased after SMN depletion. The Notch ligands Jagged1 and Delta1 increased their expression around five to six fold. The Notch receptor increased its expression around two fold, whereas the levels of its active form, NICD were found increased around four fold, as compared to shRNA EV ( Figure 1A,B). Moreover, by performing immunocytochemistry in U87MG cells, increased NICD immunoreactivity was found in the nuclei of SMN deficient cells as compared to those transduced with shRNA EV ( Figure 1C).
Together, these results indicated that SMN depletion in an astrocyte cell line was associated with an increased activation of the Notch signaling pathway. We therefore examined, in an in vivo model, the SMNΔ7 mouse, whether Notch ligands were also increased in spinal cord astrocytes, as well as the potential effects on neighboring spinal cord motoneurons.
Increased Notch Signaling in Spinal Cord Motoneurons of the SMNΔ7 Mouse
In the SMNΔ7 mouse model of SMA, motor impairment is manifest at postnatal day 11 (P11) [38]. Immunostaining was performed for GFAP to visualize astroglia in the lumbar spinal cord of SMNΔ7 mice at this postnatal age. Quantification of the relative area of GFAP-positive structures within the ventral horn demonstrated a significant increase in this parameter in SMA mutants as compared to WT (Figure 2A-C). In SMA astrocytes the expression of the Notch ligands Jagged1 and Delta1 was found to be significantly increased (increases of 281% and 249%, respectively) (Figure 2A,B,D,E). Spinal cord motoneurons were identified by their large bodies (>20 μm) when labeled with blue fluorescent Neuro Trace Nissl staining (Figure 2A,B). Astroglial processes with strong immunoreactivities for Jagged1 and Delta1 were found in intimate contact with motoneurons in SMA (Figure 2A,B, insets). Activation of the Notch receptor is induced by cell-to-cell contact-mediated binding of its ligands; thus, we examined whether the Notch pathway could be activated in spinal cord motoneurons of SMA mutants. A significant increase (215%) in the immunoreactivity of the Notch receptor was observed co-localizing with that of the SMI-32 antibody in lumbar spinal cord motoneurons of SMA mutants, as compared to age-matched WT animals (Figure 3A,C). Moreover, a significant increase (308%) in the immunoreactivity for NICD was found in SMA motoneurons (Figure 3B,D). Increased levels of NICD were detected in the perikaryon but also in the nucleus of these cells (Figure 3B, arrows).
The activation of Notch signaling in postmitotic neurons results in the inhibition of the expression of Ngn3 [13]. Thus, to further test Notch signaling activation in SMA motoneurons, Ngn3 levels were studied in these cells. Ngn3 immunoreactivity was found to be located both in the nucleus and perikaryon of spinal cord motoneurons in WT mice ( Figure 4A), by contrast, a significant reduction (54%) of Ngn3 immunoreactivity in motoneurons of SMA mutants was found, with absence of Ngn3 in their nuclei ( Figure 4A,B). Thus, SMA motoneurons show increased NICD levels and reduced Ngn3 expression, confirming the activation of the Notch pathway in these cells.
Discussion
SMN deficiency induced in vitro in the astroglioma cell line U87MG resulted in an increase in the expression of the main components of Notch signaling pathway, as well as increased localization of NICD in cell nuclei. These results prompted us to explore in the SMNΔ7 mouse model of SMA the expression of Notch ligands in reactive astrocytes, and the potential activation of the Notch signaling in the neighboring spinal cord motoneurons.
Our results, indicating astrocytosis in the lumbar spinal cord of SMNΔ7 mice at P11, are in agreement with previous findings in which, in a more severe mouse model of SMA, ventral horn astrocytosis was detectable before and during spinal cord motoneuron death [37]. Interestingly, motoneuron loss has also been reported in the lumbar spinal cord of the SMNΔ7 mice at symptomatic stages [38]; thus, astroglial activation in SMNΔ7 mice may be associated with motoneuron pathology, as previously described in amyotrophic lateral sclerosis [39]. How reactive astrocytes affect motoneuron function in SMA has not been studied yet. Here, we report that reactive astrocytes in SMA display increased expression of the Notch ligands Jagged1 and Delta1. Although in vitro SMN depletion induced increased expression of Notch ligands in U87MG cells, suggesting a potential role of SMN as a repressor of Notch ligand expression in astrocytes, the mechanisms that regulate their expression in vivo in mature astrocytes are not fully understood. In this sense, the expression of Jagged1 is increased in astrocytes under an inflammatory environment [32], and the involvement of inflammatory pathways in SMA has been proposed [40]. Moreover, the cytokine transforming growth factor β1 (TGFβ1) is directly implicated in the up-regulation of Jagged1 expression in astrocytes [41]; interestingly, disruption of TGFβ signaling is an important molecular event in the pathogenesis of several motoneuron diseases [42]. In addition, Jagged1 has an important role in promoting astrocytosis [43] through the induction of GFAP gene expression [32]; thus, it can be proposed that the observed increase in GFAP immunoreactivity in the ventral horn of our SMA mutants may also be a result of the increased expression of Jagged1 in astrocytes.
It has been demonstrated that Jagged1 induces Notch signaling in adjacent cells through a cell-to-cell relay [44]. As astrocyte processes expressing high levels of Notch ligands were observed in intimate contact with spinal cord motoneurons, we addressed whether the Notch pathway could be activated in these cells. Immunoreactivity for the active form of Notch (NICD) was found to be increased both in the nucleus and perikaryon of motoneurons from SMNΔ7 mice, demonstrating an activation of Notch signaling in SMA motoneurons. A similar pattern of NICD distribution has been described in hippocampal neurons in response to activity; in these cells a parallel increase in Notch receptor expression was also reported [45]. In agreement with these findings, we also found an increase in Notch receptor expression in SMA motoneurons. As NICD is a relatively unstable fragment of Notch [46], it has been proposed that positive feedback loops occur in the Notch system, in which Notch ligands maintain increased Notch receptor expression in cells undergoing Notch signaling [44,47]. Thus, our results suggest that SMN depletion, in vitro in an astroglioma cell line and in vivo in the SMNΔ7 mouse, is associated with increased expression of Notch ligands in astrocytes, and that these cells may activate Notch signaling in adjacent motoneurons.
Ngn3 is a protein whose expression is negatively regulated by Notch signaling [13][14][15]. Our results demonstrating reduced Ngn3 expression in SMA motoneurons further indicate that Notch signaling is abnormally active in these cells. In WT motoneurons Ngn3 immunoreactivity was found both in the perikaryon and the cell nucleus. Previous studies have shown that Ngn3 immunoreactivity is also located in the perikaryon and neurites of hippocampal neurons; however, during the differentiation process its immunoreactivity progressively increases in the nucleus [48]. As Ngn3 functions as a transcriptional regulator, our results indicating a total absence of Ngn3 immunoreactivity in the nucleus of SMA motoneurons have important repercussions in the context of SMA pathogenesis. Ngn3 has been demonstrated to promote neurite outgrowth [13]. In this sense, lack of axonal outgrowth has been described in some spinal cord motoneurons of human SMA [49] and, in murine models, substantial denervation of some muscles (up to 50% in intercostal muscles) has been reported, together with a functional deficit and an arrest of the postnatal development of neuromuscular junctions, which show clusters of unoccupied acetylcholine receptors [31,50]. These findings indicate that impairment/lack of motor nerve terminals in SMA could be related to the Ngn3 deficit in spinal cord motoneurons. In PC-12 cells, SMN deficiency also resulted in neuritogenesis impairment in NGF-differentiated cells; however, the effect was related to an up-regulation of the RhoA/ROCK pathway [23]. Also, we reported that in U87MG cells SMN deficiency resulted in impaired cell migration by altering the actin cytoskeleton through the activation of RhoA/ROCK [27]. Interestingly, Notch signaling has also been found to activate this pathway to regulate actin dynamics [51] and thus to inhibit neurite extension [52]. In addition to an impaired neuritogenesis, increased Notch activation in spinal cord motoneurons may predispose them to apoptosis, as demonstrated in cortical neurons in response to ischemic stroke [53].
Besides its roles in neurons and astroglia, Notch has profound functions in other cell types in the brain, including microglia [54], oligodendrocytes [55] and endothelial cells [56]; thus, further studies are needed to understand the role of the Notch system in SMA.
U87MG Cell Culture and Transduction
U87MG human astroglioma cells were a gift from Dr. Priam Villalonga (IUNICS, University of the Balearic Islands, Palma de Mallorca, Spain). U87MG cells were subconfluently grown and passaged, routinely tested for mycoplasma contamination and subjected to frequent morphological tests and growth curve analysis as quality-control assessments. U87MG cells were grown in Dulbecco's Modified Eagle's Medium (DMEM) supplemented with 2 mM L-glutamine and 5% heat-inactivated fetal calf serum in a humidified incubator at 37 °C with 5% CO 2 .
To reduce SMN expression, RNA interference experiments were employed using lentiviral particles, as previously described [27]. Briefly, constructs were generated in pSUPER.retro.puro (OligoEngine; Seattle, WA, USA) using specific oligonucleotides targeting SMN sequence, indicated by capital letters and as previously described [57]; forward: gatccccCGACCTGTGAAGTAGCTAAttcaagagaTTAGCTACTTCACAGGTCGttttt and reverse: agctaaaaaCGACCTGTGAAGTAGCTAAtctcttgaaTTAGCTACTTCACAGGTCGggg. For lentiviral transduction, cells were plated at a density of 25,000 cells/mL in 6-well plates and 3 h later medium was replaced with medium containing lentiviruses (2 TU/cell) carrying shRNA empty vector (EV) or shSMN. The medium was replaced with fresh medium 24 h later and infection efficiency was monitored in each experiment by direct counting Green Fluorescent Protein (GFP)-positive cells. Cells were grown for 4 additional days before sample collection for western blotting or immunocytochemistry assays.
Western Blotting
U87MG cells were rinsed in ice-cold PBS and lysed with 50 mM Tris HCl, pH 6.8, 150 mM NaCl, 1 mM EDTA, 1% Triton X-100 containing a cocktail of protease inhibitors (Complete Mini; Roche Pharmaceutical, Basel, Switzerland). Lysates were sonicated and proteins quantified by means of the DC Protein Assay from Bio-Rad Laboratories (Hercules, CA, USA). Protein equivalents from each sample were resolved by SDS-polyacrylamide gel electrophoresis and electrotransferred to 0.45 µm nitrocellulose membranes (Amersham; Buckinghamshire, UK) using a Bio-Rad semidry trans-blot, according to the manufacturer's instructions. Membranes were blocked at 21 ± 1 °C for 1 h with PBS containing non-fat dry milk, 0.5% bovine serum albumin (BSA) and 0.2% Tween 20. Membranes were probed overnight at 4 °C using antibodies directed to SMN (1:5000) from BD Biosciences (Franklin Lakes, NJ, USA); Jagged1 (1:1000) and Delta1 (1:500) both from Santa Cruz Biotechnology (Santa Cruz, CA, USA); Notch and NICD (1:1000) both from Cell Signaling Technology (Danvers, MA, USA); and α-tubulin (1:5000) from Sigma-Aldrich (St. Louis, MO, USA). Membranes were then washed with PBS and incubated for 2 h with the appropriate peroxidase-conjugated secondary antibody. Blots were developed with the chemiluminescent peroxidase substrate and visualized in chemiluminescence film (Amersham). The apparent molecular weight of proteins was determined by calibrating the blots with pre-stained molecular weight markers (Bio-Rad).
Mouse Model
Mouse lines were kindly provided by Dr. A. Burghes (The Ohio State University, Columbus, OH, USA). Experimental mice were obtained by breeding pairs of SMA carrier mice (Smn +/− ; SMN2 +/+ ; SMN∆7 +/+ ) on a FVB/N background. Identification of wild-type (WT) (Smn +/+ ; SMN2; SMN∆7) and mutant SMA mice (Smn −/− ; SMN2; SMN∆7) was done by PCR genotyping of tail DNA as previously described [31,38]. WT and mutant mice were always used at P11. All experiments were performed according to the guidelines of the European Communities Council Directive for the Care of Laboratory Animals.
Immunofluorescence and Nissl Staining
Mice were anesthetized with 2% tribromoethanol (0.15 mL/10 g body weight, i.p.) and intracardiacally perfused with saline solution followed by 4% paraformaldehyde in 0.1 M phosphate buffer, pH 7.4. Spinal cords were post-fixed by immersion in the same fixative solution for 1-24 h. Then, transverse serial cryostat sections (10 μm thick) from lumbar segments were obtained with a Leica cryostat (Leica CM3050) and mounted on microscope slides. Sections were quenched with 3% H 2 O 2 in phosphate buffer saline (PBS) and permeabilized with methanol for 5 min. Then, sections were blocked with 5% normal goat serum and 0.2% Triton X-100 in PBS for 1 h. Sections were incubated overnight at 4 °C with the primary antibody diluted in blocking solution. The following primary antibodies were used for immunofluorescence: anti-glial fibrillary acidic protein (GFAP) . The SMI-32 antibody was used to specifically label spinal cord motoneurons as previously described [58].
For immunofluorescence, sections were incubated for 1 h with the appropriate secondary antibody, Alexa Fluor 555 goat anti-mouse IgG (1:200) or Alexa Fluor 488 goat anti-rabbit IgG (1:200) (Invitrogen, Carlsbad, CA, USA). Sections were then washed and mounted using Fluorescent Mounting Medium (Dako Cytomation). Immunohistochemical controls, performed by omitting the primary antibody, resulted in the abolition of the immunostaining. In some cases, spinal cord sections were also labeled with blue fluorescent Neuro Trace Nissl staining (Molecular Probes, Eugene, OR, USA).
Image Acquisition and Analysis
Images were acquired digitally using a 20× or 40× oil immersion objective with a Leica TCS SP2 confocal laser-scanning microscope. Images from WT and SMA mutant littermate preparations were taken under similar conditions (laser intensities and photomultiplier voltages). A minimum of ten lumbar spinal cord sections were studied per animal and experimental condition, with at least four mice for each experimental condition. In order to quantify GFAP immunoreactivity, four selected fields of ventral spinal cord were digitized and the Mean Gray Value for GFAP immunoreactivity was measured in a blinded manner and corrected by the value of the area of the field to obtain the relative GFAP-positive area. In order to quantify Jagged1, Delta1, Notch, NICD or Ngn3 immunoreactivities, four selected fields of ventral spinal cord containing cells labeled with anti-GFAP or SMI32 antibodies were digitized, and the Mean Gray Value for Jagged1 or Delta1 immunoreactivity in GFAP-positive cells (astrocytes), as well as Notch, NICD or Ngn3 immunoreactivity in SMI32-positive cells (motoneurons), was measured in a blinded manner with ImageJ (W. Rasband, National Institutes of Health, Bethesda, MD; http://rsb.info.nih.gov/ij/). The values were background subtracted using the average Mean Gray Value of the preparation background in each of the experimental conditions, and data were always represented as a percentage of the values in WT mice.
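The following Python/NumPy helpers sketch the arithmetic just described (mean gray values inside labeled cells, background subtraction, and normalization to the wild-type mean). The function names and the use of boolean masks are illustrative assumptions, not the authors' ImageJ workflow.

```python
import numpy as np

def mean_gray_value(image, cell_mask):
    """Mean pixel intensity of the marker channel inside the labeled-cell mask."""
    return image[cell_mask].mean()

def background_subtracted(image, cell_mask, background_mask):
    """Mean Gray Value in cells minus the mean of the preparation background."""
    return mean_gray_value(image, cell_mask) - image[background_mask].mean()

def percent_of_wt(sma_values, wt_values):
    """Express SMA measurements as a percentage of the wild-type mean."""
    return 100.0 * np.mean(sma_values) / np.mean(wt_values)
```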
Statistical Analysis
All data are expressed as mean ± SEM values. Statistical significance was assessed by Student's t-test. Differences were considered significant when the p value was less than 0.05.
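A minimal sketch of this comparison in Python with SciPy is shown below; the group arrays and the equal-variance assumption of the classical Student's t-test are illustrative.

```python
import numpy as np
from scipy import stats

def compare_groups(wt, sma, alpha=0.05):
    """Mean and SEM for each group, plus a two-sample Student's t-test."""
    summary = {name: (np.mean(v), stats.sem(v)) for name, v in [("WT", wt), ("SMA", sma)]}
    t_stat, p_value = stats.ttest_ind(wt, sma)   # equal variances assumed (Student's t-test)
    return summary, p_value, p_value < alpha
```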
Conclusions
In summary, our results demonstrate that, both in vitro and in vivo, SMN deficiency results in increased expression of Notch ligands on astrocytes and that lumbar spinal cord motoneurons of SMNΔ7 mice, adjacent to reactive astrocytes, display increased Notch signaling, as denoted by increased NICD levels and reduced expression of the proneural gene Ngn3. These findings may be relevant to understand some pathologic attributes of SMA motoneurons.
|
2014-10-01T00:00:00.000Z
|
2013-05-29T00:00:00.000
|
{
"year": 2013,
"sha1": "3fe09e5fedded6bfa2fb88bec65e0ef63e3ba63e",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1422-0067/14/6/11424/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "3fe09e5fedded6bfa2fb88bec65e0ef63e3ba63e",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
}
|
238026294
|
pes2o/s2orc
|
v3-fos-license
|
Research on the County Water Resources Carrying Capacity in the New Period
With rapid economic and social development, human disturbances to the ecosystem have become more and more intense. However, the scale of regional economic and social development that regional water resources can support has boundaries. The central government and local governments at all levels have clearly proposed to carry out water resources carrying capacity evaluation and early warning. From the perspective of the actual management of the county water administrative department, and with the administrative divisions within the county as the basic unit, principles for index selection and for determining index grading standards are proposed, and an index system covering the four major levels of society, economy, water resources, and ecology is constructed. Then, combined with the functions of various government departments, the basic framework of a universal water resources carrying capacity early warning mechanism is proposed, comprising three major processes: carrying capacity evaluation and update, early warning information release, and implementation of differentiated control measures.
Introduction
Water resources are among the most important material resources and are indispensable for human survival and development; they provide the basic support for economic development and are related to economic security, ecological security and national security. Since the economic reform and opening up, owing to rapid population agglomeration, accelerating urbanization, and rapid economic and social development, regional water demand and the development and utilization of water resources have increased significantly, and human disturbances to the ecosystem have become more and more intense. However, the renewable capacity and renewal rate of regional water resources are limited, and the endowment of water resources differs greatly between regions. Therefore, the scale of regional economic and social development that regional water resources can support is bounded, and the concept of water resources carrying capacity was thus put forward [1].
The carrying capacity of water resources introduces the concept of carrying capacity into the field of water resources and studies carrying capacity with water as the resource in question. Under this concept, the water resources system is the carrier, human beings together with the economic, social and ecological systems on which they depend are the objects being carried, and maintaining the virtuous cycle of the ecosystem is the control objective of research on the carrying capacity of water resources. The carrying capacity of water resources involves many factors, such as the economy, society, water resources and ecology, and requires comprehensive research in a composite system [2]. General Secretary Xi Jinping pointed out that water has become a severely scarce resource in China, and proposed that the county should be used as the unit for evaluating the carrying capacity of resources and the environment and for establishing an early warning mechanism. In January 2020, the General Office of the Ministry of Natural Resources clearly proposed to carry out the evaluation of water resources carrying capacity and incorporate it into the resource and environment carrying capacity system. Subsequently, the central government and local governments at all levels clearly stated in the water-saving action plan that water resources carrying capacity evaluation, monitoring and early warning should be carried out.
At present, foreign research on the carrying capacity of water resources mostly appears within the theory of sustainable development, and the main research directions are the sustainable use of water, indicators of water scarcity, the limits of water resources development based on the health of the natural environment, and the utilization limits of river development, etc. [3,4]. By applying relevant theories and methods to evaluate the degree of regional water resources carrying capacity, the sustainable utilization of regional water resources is analysed; based on the evaluation results, the government formulates corresponding policies and measures for urban water supply, water resources utilization, and industrial development. Domestic research on water resources carrying capacity started relatively late, but certain results have been achieved in recent years. In particular, domestic scholars have carried out systematic research on the concept, connotation, theoretical system, calculation methods, evaluation index systems and evaluation models of water resources carrying capacity. Zuo QT [2] divided the domestic water resources carrying capacity research methods into three categories, the empirical formula method, the comprehensive evaluation method and the system analysis method, and proposed that the focus of future water resources carrying capacity research should be basic models, internal mechanisms, water control results, calculation models, system platforms, dynamic evaluation, early warning and control, etc. Xiu HL et al. [5] summarized the experience of water resources carrying capacity control measures in typical regions, and reviewed the main measures for the two-way control of domestic water resources carrying capacity from the two directions of strengthening the carrying capacity and reducing the load. On the basis of constructing regional characteristic index systems for specific research areas, Yu P et al. [6], Jiang Y et al. [7], Fan CY et al. [8] and Fan YH et al. [9] respectively adopted the aquatic ecological footprint method, the principal component analysis method, the artificial neural network method and couplings between methods to evaluate regional water resources carrying capacity.
Overall, the current domestic research is mostly limited to the definition of the concept of water resources carrying capacity, the diversification of evaluation methods, and the analysis of the current regional water resources carrying capacity. Researchers pay more attention to quantitative research on results, and the applicability of the index system is relatively limited. At the same time, there are still few studies on the periodic dynamic update of water resources carrying capacity and the establishment of early warning management and control mechanisms. In the new era, China's economy has shifted from a stage of high-speed growth to a stage of high-quality development. Therefore, how to implement the rigid constraints of water resources through the establishment of water resources carrying capacity monitoring and early warning mechanisms at the county level is an urgent problem to be solved. From the perspective of the management practice of the county water administrative department, this research focuses on the actual water resources management of the county, and proposes to construct the evaluation index system and evaluation method of the regional water resources carrying capacity with the administrative divisions within the county as the basic unit. Then, the regional water resources carrying capacity evaluation model is combined with the digital results of county water resources management and the functions of various government departments to further design the water resources carrying capacity early warning mechanism process framework, and the corresponding management and control measures are studied and proposed in this study.
2 Construction of evaluation system for water resources carrying capacity
2.1 Construction of water resources carrying capacity index system
Principles of the index system
Representativeness. The indicators should be typical and representative of the hierarchy they describe. Many factors are involved in the carrying capacity of water resources, so indicators that are comprehensive and characteristic should be selected to reflect the status of the represented hierarchy as fully as possible.
Conciseness. The calculation and measurement of the indicators must be simple and clear, and the relevant data must be easy to collect. Meanwhile, in order to facilitate the periodic dynamic update of the carrying capacity evaluation, the calculation method must be simple and fast.
Regionality. The carrying capacity differs greatly between regions. The index system for the carrying capacity of the county should therefore include indicators with regional characteristics.
Index system
The evaluation of water resources carrying capacity, influenced and restricted by economy, society, ecological environment, etc., is a relatively complex system. The construction of an index system is the basis for carrying capacity evaluation. Based on the regional resources conditions, the load intensity of the current economic and social development on water resources is the actual focus of the county. Therefore, a total of eight indicators from four aspects dominated by relative indicators are determined to constitute an indicator system, as shown in the following table.
| Criterion layer | Indicator | Calculation | Correlation |
| --- | --- | --- | --- |
| Economic | Water consumption per ten thousand RMB of non-agricultural GDP (m3/ten thousand RMB) | Non-agricultural water consumption / non-agricultural GDP (ten thousand RMB) | - |
| Economic | Water consumption per ten thousand RMB of industrial added value (m3/ten thousand RMB) | Water consumption / industrial added value (ten thousand RMB) | - |
| Ecology | Water quality compliance rate of water function zones (%) | Qualified quantity / total quantity | + |
| Ecology | Vegetation coverage rate (%) | Area covered by vegetation / total area of the region | + |
Note: + represents a positive correlation; - represents a negative correlation.
Principles for determining the grading standard of evaluation factors
The classification of grades is the basis for determining the strength of water resources carrying capacity, but there are still few unified standards for its evaluation. This research considers general issues and proposes the following principles for determining the classification standards.
Orderliness. For existing standards, the grading standards should follow the priority order of international, national, provincial, and municipal levels and adopt the corresponding standards in turn.
Dynamic. With the development of the social economy and technological progress, the grading standards should change accordingly and should be updated regularly.
Scientificality. For indicators without recognized standards, scientific methods such as the expert experience method, the total score frequency method or the equal interval method should be used for quantification.
Characteristic. Existing research results can be referred to for some indicators. Taking the actual use of water resources in the county into account, the grading standards of these indicators should be determined based on the spatial differences between regions.
Overview of BP neural network model
Error back-propagation, abbreviated as the BP algorithm, aims to minimize the error of the output result and is currently the most widely used neural network model [10]. It is a one-way, multi-layer feed-forward network. The model structure includes an input layer, an output layer, and a hidden layer. The nodes of two adjacent layers are connected by weights, and each node has a threshold that controls the critical value of its response. In the specific construction process, data standardization is required, so that data of different natures can be compared and correctly reflect the combined result of different influences, and so that the effects of the different dimensions and magnitudes of the various indicators are eliminated. The standardized indicator data are used as the input layer and the expected evaluation result as the output layer; the input-layer data are forwarded to the hidden layer and, after being processed by the activation function of that layer, are transmitted to the output layer. The error between the network output and the expected result is propagated backwards to correct the weights of each layer, and the complex internal correspondence between input and output is learned through continuous training, giving the final BP neural network model. The topological structure of the BP neural network is shown in the figure.
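The following is a minimal sketch of such a single-hidden-layer BP network in Python with NumPy, assuming sigmoid activations and min-max standardization of the indicator data; the layer sizes and learning rate are illustrative and are not taken from this study.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def standardize(X):
    """Min-max standardization so indicators of different dimensions become comparable."""
    return (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0) + 1e-12)

class BPNet:
    """Single-hidden-layer BP network: standardized indicators in, carrying-capacity grade out."""
    def __init__(self, n_in, n_hidden, n_out, lr=0.1, seed=0):
        rng = np.random.default_rng(seed)
        # Initial weights and thresholds drawn uniformly from [0, 1), mirroring the
        # random (0, 1) initialization described in the text.
        self.W1, self.b1 = rng.random((n_in, n_hidden)), rng.random(n_hidden)
        self.W2, self.b2 = rng.random((n_hidden, n_out)), rng.random(n_out)
        self.lr = lr

    def forward(self, X):
        self.H = sigmoid(X @ self.W1 + self.b1)       # hidden-layer activation
        self.Y = sigmoid(self.H @ self.W2 + self.b2)  # output-layer activation
        return self.Y

    def backward(self, X, T):
        # Squared-error gradients propagated from the output layer back to the input layer.
        dY = (self.Y - T) * self.Y * (1 - self.Y)
        dH = (dY @ self.W2.T) * self.H * (1 - self.H)
        self.W2 -= self.lr * self.H.T @ dY
        self.b2 -= self.lr * dY.sum(axis=0)
        self.W1 -= self.lr * X.T @ dH
        self.b1 -= self.lr * dH.sum(axis=0)

    def train(self, X, T, epochs=2000):
        for _ in range(epochs):
            self.forward(X)
            self.backward(X, T)
        return float(np.mean((self.Y - T) ** 2))   # final training error
```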
Overview of Particle Swarm Optimization (PSO) algorithm
The basic idea of PSO is derived from the study of the foraging behaviour of birds: a flock of birds searches for food at random, and there is only one piece of food in the area. None of the birds knows where the food is, but each knows how far its current location is from the food, so the simplest and most effective strategy for finding the food is to search the area around the bird that is currently closest to it. In the PSO algorithm, each potential solution of the optimization problem can be imagined as a point in the d-dimensional search space, called a "particle". Every particle has a fitness value determined by the objective function and a velocity that determines the direction and distance of its flight; the particles then follow the current optimal particle to search the solution space, and complex global behaviours emerge from simple, regular interactions. The PSO algorithm is easy to implement, has high solving efficiency, and has strong nonlinear optimization performance. It is a popular algorithm for research and application in the field of optimization.
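A minimal PSO sketch in the same Python style is given below; the inertia weight w and acceleration coefficients c1 and c2 are common textbook defaults chosen for illustration, not values from this study.

```python
import numpy as np

def pso(fitness, dim, n_particles=30, iters=200, bounds=(0.0, 1.0),
        w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimize `fitness` over a `dim`-dimensional box with a basic PSO loop."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))          # particle positions
    v = np.zeros_like(x)                                 # particle velocities
    pbest = x.copy()                                     # each particle's best position so far
    pbest_f = np.array([fitness(p) for p in x])
    gbest = pbest[pbest_f.argmin()].copy()               # swarm-wide best position
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([fitness(p) for p in x])
        better = f < pbest_f                             # update personal bests
        pbest[better], pbest_f[better] = x[better], f[better]
        gbest = pbest[pbest_f.argmin()].copy()           # update global best
    return gbest, float(pbest_f.min())
```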
PSO-BP neural network model construction
The BP neural network performs well in self-learning and non-linear mapping, but its convergence is slow and it is very sensitive to the initial weights and thresholds: the BP algorithm initializes the connection weights between the network nodes and the node thresholds as random numbers in the interval (0, 1), which introduces great uncertainty, and the model can easily fall into a local optimum, leading to unstable predictions whose accuracy is sometimes high and sometimes low [11]. The PSO algorithm has simple coding, few parameters, and a fast search speed. By combining PSO with the BP neural network model, the PSO algorithm is used to quickly find the initial weights and thresholds of the BP model, improving the stability and operating efficiency of the neural network. The specific process is shown in the figure below.
3 Framework of early warning mechanism for water resources carrying capacity
General idea
Based on the county water management platform, with the township (street) as the basic control unit and the responsibilities of the various departments integrated, the basic framework of the early warning mechanism of water resources carrying capacity is constructed, comprising three modules: evaluation and update of carrying capacity, release of early warning information, and implementation of differentiated control measures.
The overall framework of the early warning mechanism
According to the responsibilities of governments and departments at all levels, this study proposes the specific water resources carrying capacity early warning mechanism process as follows: ① The water administrative department is responsible for proposing the index system, grading standards and calculation methods of water resources carrying capacity, incorporating them into the county water management platform, and coupling the relevant monitoring data so as to carry out automatic evaluation of water resources carrying capacity. At the same time, the water administrative department is responsible for setting the update frequency of the carrying capacity index system and grading standards.
② Based on the evaluation, the county water administrative department will issue early warning information to the public, county-level government departments, and township (street) government and people's governments in various ways.
③ After receiving the warning notice, the public should take the initiative to save water; the township (sub-district) people's government should vigorously carry out water conservation publicity, promote water conservation in the various industries in the region, and reduce the load on the region's water resources by adjusting the industrial structure and applying strict enterprise entry standards; relevant departments of the county-level government implement management and control measures in accordance with their own responsibilities, by increasing the carrying capacity within their domain and reducing the carried load.
Conclusion
Aiming at the actual needs of county water resources management, this article proposes that, when constructing the evaluation of county water resources carrying capacity, the indicator system should be representative, concise, and regional. In addition, the grading standards of the water resources carrying capacity evaluation indicators need to be orderly, dynamic, scientific, and characteristic when they are determined. Furthermore, a carrying capacity index system comprising the four criterion levels of water resources, society, economy and ecology is proposed. Based on the above research, and combined with the specific responsibilities of county governments at all levels and of the relevant departments, a general-purpose water resources carrying capacity early warning mechanism framework has been constructed that includes three major processes: evaluation and update of water resources carrying capacity, early warning information release, and implementation of differentiated control measures. This framework also clarifies the responsible body for each process.
|
2021-08-27T17:02:53.651Z
|
2021-01-01T00:00:00.000
|
{
"year": 2021,
"sha1": "4656ba21df538647b95641752ba44b28f32f450e",
"oa_license": "CCBY",
"oa_url": "https://www.e3s-conferences.org/articles/e3sconf/pdf/2021/52/e3sconf_wchbe2021_01009.pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "80fcc43842c1bbc2075e5dacaf702c67902c2faa",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Business"
]
}
|
118499149
|
pes2o/s2orc
|
v3-fos-license
|
Search for disappearing tracks in proton-proton collisions at sqrt(s) = 8 TeV
A search is presented for long-lived charged particles that decay within the CMS detector and produce the signature of a disappearing track. Disappearing tracks are identified as those with little or no associated calorimeter energy deposits and with missing hits in the outer layers of the tracker. The search uses proton-proton collision data recorded at sqrt(s) = 8 TeV that corresponds to an integrated luminosity of 19.5 inverse femtobarns. The results of the search are interpreted in the context of the anomaly-mediated supersymmetry breaking (AMSB) model. The number of observed events is in agreement with the background expectation, and limits are set on the cross section of direct electroweak chargino production in terms of the chargino mass and mean proper lifetime. At 95% confidence level, AMSB models with a chargino mass less than 260 GeV, corresponding to a mean proper lifetime of 0.2 ns, are excluded.
Introduction
Many beyond-the-standard-model (BSM) scenarios introduce long-lived charged particles with mean decay lengths of the order of the size of the tracking detectors used by the CERN LHC experiments. If the decay products of such a particle are undetected, either because they have too little momentum to be reconstructed or because they interact only weakly, a "disappearing track" signature is produced. This signature is identified as an isolated particle track that extends from the interaction region but that, after the point of disappearance, leaves no hits in the muon or tracking detectors and has little energy deposited in the calorimeter cells in the region around the trajectory extrapolated to the inner radius of the calorimeter. Because standard model (SM) processes rarely produce this signature, background processes are almost entirely composed of failures of the particle reconstruction or track finding algorithms.
The disappearing track signature arises in a broad range of BSM scenarios [1][2][3][4][5][6][7][8][9][10][11][12][13]. For example, in anomaly-mediated supersymmetry breaking (AMSB) [14,15] the particle mass spectrum includes a chargino and neutralino (electroweakinos χ ± 1 and χ 0 1 , respectively) that are nearly degenerate in mass. The chargino-neutralino mass difference is of order 100 MeV such that the chargino is long-lived and can reach the CMS tracking detector before decaying to a neutralino and a pion ( χ ± 1 → χ 0 1 π ± ). The neutralino is the lightest supersymmetric particle (LSP), and so is stable because of R-parity conservation. The pion from this decay does not have sufficient momentum to be reconstructed as a track or to contribute significantly to the energy associated with the chargino track. Consequently, the decay of an AMSB chargino to a weakly interacting neutralino and an unreconstructed pion would produce the disappearing track signature.
This letter presents a search for disappearing tracks in proton-proton (pp) collision data collected at √ s = 13 TeV throughout 2017 and 2018, corresponding to an integrated luminosity of 101 fb −1 . The results of this search are presented in terms of chargino masses and lifetimes within the context of AMSB. The results are also presented more generally in a form that can be used to test any BSM scenario producing the disappearing track signature. The ATLAS experiment has previously excluded AMSB, with a purely wino LSP, for chargino masses below 460 GeV with a lifetime of 0.2 ns [16]. The CMS experiment has excluded AMSB chargino masses for a purely wino LSP below 715 GeV for a lifetime of 3 ns [17], using the data collected during 2015 and 2016. This search extends the previous CMS results to encompass the entire available √ s = 13 TeV data set, referred to as the Run 2 data set, corresponding to a total integrated luminosity of 140 fb −1 . Prior to the 2017 data-taking period, a new pixel detector was installed as part of the Phase 1 upgrade [18,19]. This new detector contains a fourth inner layer at a radius of 2.9 cm from the interaction region. The addition of this new layer enables this search to accept shorter tracks that traverse fewer layers of the tracker, thereby increasing its sensitivity to shorter lifetime particles. The interpretation of the results is extended to include the direct electroweak production of charginos in the case of a purely higgsino LSP.
The CMS detector
The central feature of the CMS apparatus is a superconducting solenoid of 6 m internal diameter. Within the solenoid volume are a silicon pixel and strip tracker, a lead tungstate crystal electromagnetic calorimeter (ECAL), and a brass and scintillator hadron calorimeter (HCAL), each composed of a barrel and two endcap sections. Forward calorimeters extend the pseudorapidity coverage provided by the barrel and endcap detectors. Muons are measured in gas-ionization detectors embedded in the steel flux-return yoke outside the solenoid.
The silicon tracker measures charged particles within the pseudorapidity range |η| < 2.5.
Simulated signal events are generated at leading order (LO) precision with PYTHIA 8.240 [24], using the NNPDF3.0 LO [25] parton distribution function (PDF) set with the CP5 tune [26] to describe the underlying event. Supersymmetric particle mass spectra are produced by ISAJET 7.70 [27], for chargino masses in the range 100-1100 (100-900) GeV in steps of 100 GeV for the wino (higgsino) LSP case. The ratio of the vacuum expectation values of the two Higgs doublets (tan β) is fixed to 5, with a positive higgsino mass parameter (µ > 0). The χ ± 1 - χ 0 1 mass difference has little dependence on tan β and the sign of µ [28]. While this mass difference typically determines the chargino's proper decay time (the lifetime in the rest frame, τ), in these simulated signal events τ is explicitly varied from 6.67 ps to 333 ns (corresponding to a range in cτ of 0.2-10 000 cm) in logarithmic steps, to examine sensitivity to a broader range of models.
In the wino LSP case, the chargino branching fraction (B) for χ ± 1 → χ 0 1 π ± is set to 100%, and both χ ± 1 χ ∓ 1 and χ ± 1 χ 0 1 production processes are simulated. In the higgsino LSP case, the second neutralino ( χ 0 2 ) is completely degenerate in mass with χ 0 1 , having equal production cross sections (σ) and branching fractions for the χ ± 1 → χ 0 1,2 + X decays. Following Ref. [29], these are taken to be 95.5% for χ ± 1 → χ 0 1,2 π ± , 3% for χ ± 1 → χ 0 1,2 eν, and 1.5% for χ ± 1 → χ 0 1,2 µν in the range of chargino masses of interest, and both χ ± 1 χ ∓ 1 and χ ± 1 χ 0 1,2 production processes are simulated. Simulated signal events are normalized using cross sections calculated to next-to-leading order plus next-to-leading-logarithmic (NLO+NLL) precision, using RESUMMINO 1.0.9 [30,31] with the CTEQ6.6 [32] and MSTW2008nlo90cl [33] PDF sets, and the final numbers are calculated using the PDF4LHC recommendations [34] for the two sets of cross sections. In the wino case, the ratio of χ ± 1 χ 0 1 to χ ± 1 χ ∓ 1 production is roughly 2:1 for all chargino masses considered. In the higgsino case, the ratio of χ ± 1 χ 0 1,2 to χ ± 1 χ ∓ 1 production is roughly 7:2. Because PYTHIA is an LO generator, it is known to be deficient in modeling the rate of initial-state radiation (ISR) and the resulting hadronic recoil [35,36]. Data-derived corrections for this deficiency are applied as functions of the transverse momentum (p T ) of the electroweakino pair (either χ ± 1 χ ∓ 1 or χ ± 1 χ 0 1,2 ). It is assumed that the production of ISR in Z boson and electroweakino pair events is similar, since both are electroweak processes, and the correction factors are derived as the ratio of the p T spectrum of Z→µµ candidates in data to that in simulated events, comparable to the method used in Ref. [36]. The ISR correction factors typically range between 1.8 and 2.0 in the kinematic region relevant for this search.
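As a rough illustration of such a data-to-simulation reweighting, the sketch below derives per-bin weights from the Z candidate pT spectra and looks them up by the electroweakino-pair pT; the binning, array inputs, and function names are assumptions for illustration and do not reproduce the CMS analysis code.

```python
import numpy as np

def isr_weights(pt_data, pt_mc, bins):
    """Per-bin weights = (normalized data yield) / (normalized simulation yield)."""
    h_data, _ = np.histogram(pt_data, bins=bins)
    h_mc, _ = np.histogram(pt_mc, bins=bins)
    h_data = h_data / max(h_data.sum(), 1)
    h_mc = h_mc / max(h_mc.sum(), 1)
    # Bins with no simulated events default to a weight of 1.
    return np.divide(h_data, h_mc, out=np.ones_like(h_data, dtype=float), where=h_mc > 0)

def apply_isr_weight(ewkino_pair_pt, weights, bins):
    """Look up the per-event weight from the electroweakino-pair pT."""
    idx = np.clip(np.digitize(ewkino_pair_pt, bins) - 1, 0, len(weights) - 1)
    return weights[idx]
```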
Simulated events are generated with a Monte Carlo program incorporating a full model of the CMS detector, based on GEANT4 [37], and reconstructed with the same software used for collision data. Simulated minimum bias events are superimposed on the hard interaction to describe the effect of additional inelastic pp interactions within the same or neighboring bunch crossings, known as pileup, and the samples are weighted to match the pileup distribution observed in data.
Event reconstruction and selection
A particle-flow (PF) algorithm [38] aims to reconstruct and identify each individual particle in an event with an optimized combination of information from the various elements of the CMS detector. The energy of photons is obtained from the ECAL measurement. The energy of electrons is determined from a combination of the electron momentum at the primary interaction vertex as determined by the tracker, the energy of the corresponding ECAL cluster, and the energy sum of all bremsstrahlung photons spatially compatible with originating from the electron track. The energy of muons is obtained from the curvature of the corresponding track. The energy of charged hadrons is determined from a combination of their momentum measured in the tracker and the matching ECAL and HCAL energy deposits, corrected for the response function of the calorimeters to hadronic showers. Finally, the energy of neutral hadrons is obtained from the corresponding corrected ECAL and HCAL energies.
For each event, hadronic jets are clustered from these reconstructed particles using the infraredand collinear-safe anti-k T algorithm [39,40] with a distance parameter of 0.4. Jet momentum is determined as the vector sum of all particle momenta in the jet, and is found from simulation to be, on average, within 5 to 10% of the true momentum over the entire p T spectrum and detector acceptance. Hadronic τ lepton decays are reconstructed with the hadron-plus-strips algorithm [41], which starts from the reconstructed jets.
The missing transverse momentum vector p miss T is computed as the negative vector sum of the transverse momenta of all the PF candidates in an event [42], and its magnitude is denoted as p miss T ; the analogous quantity computed excluding muons is denoted as p miss, µ / T . As tracking information is not available in the L1 trigger, events are collected by several triggers requiring large p miss T or p miss, µ / T , which would be produced in signal events by an ISR jet recoiling against the electroweakino pair. The L1 triggers require p miss T above a threshold that was varied during the data-taking period according to the instantaneous luminosity. The HLT requires both p miss T and p miss, µ / T with a range of thresholds. The lowest threshold trigger, designed specifically for this search, requires p miss T > 105 GeV and an isolated track with p T > 50 GeV and at least 5 associated tracker hits at the HLT. The remaining triggers require p miss T or p miss, µ / T > 120 GeV and do not have a track requirement.
After the trigger, events selected offline are required to be consistent with the topology of an ISR jet at the HLT, having p miss, µ / T > 120 GeV, and at least one jet with p T > 110 GeV and |η| < 2.4. To reject events with spurious p miss T from mismeasured jets, the difference in the azimuthal angle φ between the direction of the highest p T jet and p miss T is required to be greater than 0.5 radians. For events with at least two jets, the maximum difference in φ between any two jets, ∆φ max , is required to be less than 2.5 radians. In 2018, a 40° section of one end of the hadronic endcap calorimeter (HEM) lost power during the data-taking period. The 2018 data are therefore separated into two samples, 2018 A and B, corresponding to events before and after this loss of power, with integrated luminosities of 21 and 39 fb −1 , respectively. Events from the 2018 B period are rejected if the p miss T points to the affected region, having −1.6 < φ( p miss T ) < −0.6. This requirement, referred to as the "HEM veto", removes 31% of events in 2018 B, and leads to a reduction in signal acceptance of 16% for this data-taking period, as expected from geometrical considerations and as verified in simulation. The selection requirements applied to this point define the "basic selection", with the resulting sample dominated by W→ℓν events.
After the basic selection, isolated tracks with p T > 55 GeV and |η| < 2.1 are further selected, where the isolation requirement is defined such that the scalar sum of the p T of all other tracks within ∆R = √ (∆η) 2 + (∆φ) 2 < 0.3 of the candidate track must be less than 5% of the candidate track's p T . Tracks must be separated from jets having p T > 30 GeV by ∆R(track, jet) > 0.5. Tracks are also required to be associated with the primary pp interaction vertex (PV), the candidate vertex with the largest value of summed physics-object p 2 T . The physics objects in this sum are the jets, clustered with the tracks assigned to candidate vertices as inputs, and the associated missing transverse momentum, taken as the negative vector sum of the p T of those jets. With respect to the PV, candidate tracks must have a transverse impact parameter (|d 0 |) less than 0.02 cm and a longitudinal impact parameter (|d z |) less than 0.50 cm.
Tracks are said to have a missing hit if they are reconstructed as passing through a functional tracker layer, but no hit in that layer is associated with the track. A missing hit is described as "inner" if the missing layer is between the interaction point and the track's innermost hit, "middle" if between the track's innermost and outermost hits, and "outer" if it is beyond the track's outermost hit. The track reconstruction algorithm generally allows for some missing hits, to improve efficiency for tracks traversing the entire tracker. However, for shorter tracks this may result in spurious reconstructed tracks, arising not from charged particle trajectories but from pattern recognition errors. These spurious tracks are one of two sources of backgrounds considered in this search. This background is reduced by requiring tracks to have no missing inner or middle hits, and at least four hits in the pixel detector.
The other source of background is isolated, high-p T charged leptons from SM decays of W ± or Z bosons, or from virtual photons. These tracks can seem to disappear if the track reconstruction fails to find all of the associated hits. Missing outer hits in lepton tracks may occur because of highly energetic bremsstrahlung in the case of electrons, or nuclear interactions with the tracker material in the case of hadronically decaying τ leptons (τ h ). Electrons or τ h may be associated with little energy deposited in the calorimeters because of nonfunctional or noisy calorimeter channels. To mitigate this background, tracks are rejected if they are within ∆R(track, lepton) < 0.15 of any reconstructed lepton candidate, whether electron, muon, or τ h . This requirement is referred to as the "reconstructed lepton veto". To avoid regions of the detector known to have lower efficiency for lepton reconstruction, fiducial criteria are applied to the track selection. In the muon system, tracks within regions of incomplete detector coverage, i.e., within 0.15 < |η| < 0.35 and 1.55 < |η| < 1.85, are rejected. In the ECAL, tracks in the transition region between the barrel and endcap sections at 1.42 < |η| < 1.65 are rejected, as are tracks whose projected entrance into the calorimeter is within ∆R < 0.05 of a nonfunctional or noisy channel. Because two layers of the pixel tracker were not fully functional in certain data-taking periods, some regions exhibited low efficiency for the requirement of four or more pixel hits, and tracks within these regions are rejected. These regions correspond to the range 2.7 < φ < π for the region 0 < η < 1.42 in the 2017 data set, and to the range 0.4 < φ < 0.8 for the same η region in the 2018 data set. Application of this final requirement rejects approximately 20% of simulated signal tracks.
Additional regions of lower lepton reconstruction efficiency are identified using "tag-andprobe" (T&P) studies [43]. Candidate Z→ objects are selected in data where the invariant mass of a tag lepton and a probe track is within 10 GeV of m Z , the world-average mass of the Z boson [44], resulting in a sample of tracks having a high probability of being a lepton without explicitly requiring that they pass the lepton reconstruction. The efficiency of the lepton reconstruction is calculated using these probe tracks across the full coverage of the detector, and also for each local η-φ region of size 0.1×0.1. Candidate tracks are rejected from the search region if they are within an η-φ region in which the local efficiency is less than the overall mean efficiency by at least two standard deviations. This procedure removes an additional 4% of simulated signal tracks.
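A minimal sketch of such an efficiency-map veto is given below, assuming per-probe NumPy arrays of eta, phi, and a boolean flag indicating whether the probe was matched to a reconstructed lepton; the cell size and function names are illustrative, not the CMS implementation.

```python
import numpy as np

def low_efficiency_cells(probe_eta, probe_phi, probe_matched, n_eta=50, n_phi=63):
    """Flag eta-phi cells whose lepton-reconstruction efficiency is > 2 sigma below the mean."""
    eta_bins = np.linspace(-2.5, 2.5, n_eta + 1)          # roughly 0.1-wide eta cells
    phi_bins = np.linspace(-np.pi, np.pi, n_phi + 1)      # roughly 0.1-wide phi cells
    total, _, _ = np.histogram2d(probe_eta, probe_phi, bins=[eta_bins, phi_bins])
    passed, _, _ = np.histogram2d(probe_eta[probe_matched], probe_phi[probe_matched],
                                  bins=[eta_bins, phi_bins])
    eff = np.divide(passed, total, out=np.zeros_like(passed), where=total > 0)
    populated = total > 0
    mean, sigma = eff[populated].mean(), eff[populated].std()
    return populated & (eff < mean - 2.0 * sigma)         # boolean mask of vetoed cells
```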
Finally, two criteria define the condition by which a track is considered to have "disappeared": (1) the track must have at least three missing outer hits, and (2) the sum of all associated calorimeter energy within ∆R < 0.5 of the track (E ∆R<0.5 calo ) must be less than 10 GeV. From the sample of tracks passing all of the requirements described above, three signal categories are defined depending on the number of tracker layers that have hits associated to the track, n lay : n lay = 4, n lay = 5, and n lay ≥ 6. At η = 0 these categories correspond, respectively, to track lengths of approximately 20, 20-30, and >30 cm. The previous CMS search for disappearing tracks [17] required at least seven hits associated with the selected tracks, which resulted in a sensitivity comparable to that of only the n lay ≥ 6 category in this search.
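The final classification can be summarized in a short sketch; the per-track record and field names below are illustrative assumptions, not CMS data formats.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Track:
    # Hypothetical per-track summary used only for this illustration.
    missing_outer_hits: int
    calo_energy_dr05: float    # summed calorimeter energy within DeltaR < 0.5, in GeV
    n_layers_with_hits: int

def disappearing_category(trk: Track) -> Optional[str]:
    """Return the signal category ('nlay4', 'nlay5', 'nlay6plus') or None if not selected."""
    if trk.missing_outer_hits < 3:        # criterion (1): at least three missing outer hits
        return None
    if trk.calo_energy_dr05 >= 10.0:      # criterion (2): little associated calorimeter energy
        return None
    if trk.n_layers_with_hits == 4:
        return "nlay4"
    if trk.n_layers_with_hits == 5:
        return "nlay5"
    if trk.n_layers_with_hits >= 6:
        return "nlay6plus"
    return None                           # fewer than four layers: outside the categories
```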
Charged leptons
For tracks from charged, high-$p_{\mathrm{T}}$ leptons (electrons, muons, or $\tau_{\mathrm{h}}$) to be selected by the search criteria, the lepton reconstruction must fail in such a way that a track is still observed but no lepton candidate is produced, resulting in a mismeasurement of the calorimeter energy in the event. For this reconstruction failure to occur, four conditions must be satisfied:
• The lepton's track is reconstructed, but no candidate lepton is identified near it, and $E_{\mathrm{calo}}^{\Delta R<0.5}$ is less than 10 GeV.
• The resulting $p_{\mathrm{T}}^{\mathrm{miss},\not\mu}$ must be large enough to pass the offline $p_{\mathrm{T}}^{\mathrm{miss},\not\mu}$ requirement.
• The resulting $p_{\mathrm{T}}^{\mathrm{miss}}$ and $p_{\mathrm{T}}^{\mathrm{miss},\not\mu}$ must be large enough to pass the trigger requirement.
• In the 2018 B data-taking period, the resulting $p_{\mathrm{T}}^{\mathrm{miss}}$ must pass the HEM veto.
The background from charged leptons is estimated by calculating the conditional probability of each of these four requirements in the given order, as described below, treating each lepton flavor independently in each of the three signal categories.
$P_{\mathrm{veto}}$
The probability of satisfying the first condition, $P_{\mathrm{veto}}$, is defined as the probability for a lepton candidate to fail to be identified as a lepton. This is estimated for electrons (muons) using a T&P study with $Z \to ee$ ($Z \to \mu\mu$) candidates. Events are selected that satisfy a single-electron (single-muon) trigger and contain a tag electron (muon) candidate passing tight identification and isolation criteria. A probe track is required to pass the disappearing track criteria, except for the reconstructed lepton veto for the flavor under study. The tag lepton and probe track are required to have opposite-sign electric charges and an invariant mass within 10 GeV of $m_Z$.
To study these probabilities for $\tau_{\mathrm{h}}$, $Z \to \tau\tau$ candidate events are selected in which one $\tau$ decays via $\tau \to e\nu\nu$ or $\tau \to \mu\nu\nu$, with the electron or muon serving as the tag lepton. The other $\tau$ in these events is selected as the probe track and, after applying the reconstructed electron and muon vetoes to it, the result is a sample of tracks dominated by $\tau_{\mathrm{h}}$. The electron and muon selections are as described above, with two modifications for the case of $\tau_{\mathrm{h}}$. To reduce contamination from $W \to \ell\nu$ events, the transverse mass $m_{\mathrm{T}} = \sqrt{2\, p_{\mathrm{T}}\, p_{\mathrm{T}}^{\mathrm{miss}} (1 - \cos\Delta\phi)}$ is required to be less than 40 GeV, where $p_{\mathrm{T}}$ is the magnitude of the tag lepton's transverse momentum and $\Delta\phi$ is the difference in $\phi$ between the transverse momentum of the tag lepton and the $\vec{p}_{\mathrm{T}}^{\mathrm{miss}}$. In addition, because $\tau$ leptons from the Z decay are not fully reconstructed, the invariant mass of the tag-probe pair is required to lie in a correspondingly lower mass window.

For each T&P study of $P_{\mathrm{veto}}$ (electrons, muons, and $\tau_{\mathrm{h}}$), the numbers of selected T&P pairs before and after applying the relevant flavor of the reconstructed lepton veto are labeled $N_{\mathrm{T\&P}}$ and $N^{\mathrm{veto}}_{\mathrm{T\&P}}$, respectively. To subtract non-Z boson contributions from the opposite-sign T&P samples, the selections above are repeated but requiring instead that the tag lepton and probe track have the same sign for their electric charges, yielding the quantities $N^{\mathrm{SS}}_{\mathrm{T\&P}}$ and $N^{\mathrm{veto,SS}}_{\mathrm{T\&P}}$. The probability that a lepton candidate is not explicitly identified as a lepton is then given by

$$P_{\mathrm{veto}} = \frac{N^{\mathrm{veto}}_{\mathrm{T\&P}} - N^{\mathrm{veto,SS}}_{\mathrm{T\&P}}}{N_{\mathrm{T\&P}} - N^{\mathrm{SS}}_{\mathrm{T\&P}}}.$$

The results obtained for $P_{\mathrm{veto}}$ are summarized in Table 1.
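As a numerical illustration, this same-sign-subtracted ratio (which is itself reconstructed here from the definitions in the text, since the original equation was lost in extraction) can be computed as below; the counts in the example are invented.

```python
# Illustrative only: tag-and-probe estimate of P_veto with same-sign subtraction.
def p_veto(n_tp: float, n_tp_ss: float, n_veto_tp: float, n_veto_tp_ss: float) -> float:
    """P_veto = (N_T&P^veto - N_T&P^veto,SS) / (N_T&P - N_T&P^SS)."""
    return (n_veto_tp - n_veto_tp_ss) / (n_tp - n_tp_ss)

# Invented counts, purely to show the mechanics:
print(p_veto(n_tp=120_000, n_tp_ss=900, n_veto_tp=45, n_veto_tp_ss=5))
```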
$P_{\mathrm{off}}$
The probability of satisfying the second condition, $P_{\mathrm{off}}$, is defined as the conditional probability for a single-lepton event to pass the offline requirements of $p_{\mathrm{T}}^{\mathrm{miss},\not\mu} > 120$ GeV and $|\Delta\phi(\text{leading jet}, \vec{p}_{\mathrm{T}}^{\mathrm{miss},\not\mu})| > 0.5$, given that the lepton candidate is not explicitly identified as a lepton. The $p_{\mathrm{T}}^{\mathrm{miss},\not\mu}$ of events with an unidentified lepton is modeled by assuming the lepton contributes no calorimeter energy to the event, replacing $p_{\mathrm{T}}^{\mathrm{miss},\not\mu}$ with the magnitude of the vector sum $\vec{p}_{\mathrm{T}}^{\mathrm{miss},\not\mu} + \vec{p}_{\mathrm{T}}(\ell)$. This modification is applied in single-lepton control samples for each flavor, defined as data events passing single-lepton triggers and containing at least one tag lepton of the appropriate flavor. In the case of muons, no modification of $p_{\mathrm{T}}^{\mathrm{miss},\not\mu}$ is made, as muons are already excluded from its calculation. The quantity $P_{\mathrm{off}}$ is estimated for each lepton flavor as the fraction of single-lepton control sample events with $p_{\mathrm{T}}^{\mathrm{miss},\not\mu} > 120$ GeV and $|\Delta\phi(\text{leading jet}, \vec{p}_{\mathrm{T}}^{\mathrm{miss},\not\mu})| > 0.5$ after this modification. For electrons and muons, $P_{\mathrm{off}}$ is approximately 0.7-0.8; for $\tau_{\mathrm{h}}$ it is approximately 0.2.
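The modification of the muon-corrected missing transverse momentum described above amounts to a two-vector sum; a minimal sketch is given below. The event fields and values are illustrative, and the $\Delta\phi$ requirement is omitted for brevity.

```python
# Illustrative sketch of the P_off ingredients: the unidentified lepton is assumed
# to deposit no calorimeter energy, so its pT is added vectorially to the
# muon-corrected missing transverse momentum. Field names are hypothetical.
import numpy as np

def modified_ptmiss(ptmiss: float, ptmiss_phi: float, lep_pt: float, lep_phi: float) -> float:
    """Magnitude of the vector sum of p_T^miss (mu-corrected) and the lepton p_T."""
    px = ptmiss * np.cos(ptmiss_phi) + lep_pt * np.cos(lep_phi)
    py = ptmiss * np.sin(ptmiss_phi) + lep_pt * np.sin(lep_phi)
    return float(np.hypot(px, py))

def p_off(events: list) -> float:
    """Fraction of control-sample events passing the offline p_T^miss requirement
    after the modification (the dphi(leading jet, p_T^miss) cut is omitted here)."""
    n_pass = sum(
        modified_ptmiss(e["ptmiss"], e["ptmiss_phi"], e["lep_pt"], e["lep_phi"]) > 120.0
        for e in events
    )
    return n_pass / len(events)
```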
$P_{\mathrm{trig}}$
The probability of satisfying the third condition, $P_{\mathrm{trig}}$, is defined as the conditional probability that a single-lepton event passes the trigger requirement, given that the lepton candidate is not identified as a lepton and the event passes the offline requirements of $p_{\mathrm{T}}^{\mathrm{miss},\not\mu} > 120$ GeV and $|\Delta\phi(\text{leading jet}, \vec{p}_{\mathrm{T}}^{\mathrm{miss},\not\mu})| > 0.5$. In the single-lepton control samples used to measure $P_{\mathrm{off}}$, the efficiency of the trigger requirement is calculated as a function of $p_{\mathrm{T}}^{\mathrm{miss},\not\mu}$. The trigger efficiency is then multiplied bin-by-bin by the distribution of the modified $p_{\mathrm{T}}^{\mathrm{miss},\not\mu}$ (the magnitude of the vector sum described above for $P_{\mathrm{off}}$). The fraction of events in this product that survive the requirement of modified $p_{\mathrm{T}}^{\mathrm{miss},\not\mu} > 120$ GeV then gives the estimate of $P_{\mathrm{trig}}$. The value of $P_{\mathrm{trig}}$ is approximately 0.3-0.6 for all lepton flavors.
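The bin-by-bin construction can be sketched as below; the binning is illustrative, and normalizing to the events passing the offline requirement reflects one plausible reading of the text (the conditional probability given the offline selection).

```python
# Illustrative sketch of P_trig: apply the measured trigger efficiency, binned in
# the muon-corrected p_T^miss, to the histogram of the modified p_T^miss, and take
# the efficiency-weighted fraction of events above the offline threshold.
import numpy as np

def p_trig(modified_ptmiss_values, bin_edges, trigger_eff_per_bin) -> float:
    counts, _ = np.histogram(modified_ptmiss_values, bins=bin_edges)
    offline = bin_edges[:-1] >= 120.0          # bins satisfying the offline requirement
    weighted = counts * trigger_eff_per_bin    # bin-by-bin product with the efficiency
    return weighted[offline].sum() / counts[offline].sum()
```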
$P_{\mathrm{HEM}}$
The probability of satisfying the fourth condition, $P_{\mathrm{HEM}}$, is defined as the conditional probability that a single-lepton event survives the HEM veto, given that the lepton candidate is not explicitly identified as a lepton and the event passes both the offline and trigger requirements. This probability is calculated in the sample of events forming the numerator of $P_{\mathrm{trig}}$. Because the HEM veto is applied only in the 2018 B data set, $P_{\mathrm{HEM}}$ is fixed to unity in the other data-taking periods. The value of $P_{\mathrm{HEM}}$ is approximately 0.8 for all lepton flavors.
Charged lepton background estimation
The product of these four conditional probabilities gives the overall probability for an event with a charged lepton to pass the search selection criteria. These probabilities are measured separately for each flavor and within each signal category of $n_{\mathrm{lay}}$. To normalize these probabilities to form the background estimate, the number of events with a charged lepton of each flavor ($N_{\mathrm{ctrl}}$) is counted by selecting events passing single-lepton triggers and containing a lepton of the appropriate flavor with $p_{\mathrm{T}} > 55$ GeV. A final consideration is that the efficiencies of the single-lepton triggers ($\varepsilon_{\mathrm{trigger}}$) are not 100%, so corrections are applied to $N_{\mathrm{ctrl}}$ to reflect the underlying number of events with a lepton of each flavor, including those that did not pass the related trigger. From the T&P samples used to study $P_{\mathrm{veto}}$, $\varepsilon_{\mathrm{trigger}}$ is measured as the fraction of probe tracks satisfying the single-lepton trigger requirement of the $N_{\mathrm{ctrl}}$ selection. The values are observed to be 84% for electrons, 94% for muons, and 90% for $\tau_{\mathrm{h}}$ candidates. The estimated background from charged leptons is calculated using these components as

$$N^{\mathrm{est}}_{\ell} = \frac{N_{\mathrm{ctrl}}}{\varepsilon_{\mathrm{trigger}}}\, P_{\mathrm{veto}}\, P_{\mathrm{off}}\, P_{\mathrm{trig}}\, P_{\mathrm{HEM}}.$$

In the case of the $n_{\mathrm{lay}} = 4$ and $n_{\mathrm{lay}} = 5$ signal categories, insufficient numbers of events are available for muons in the estimation of $P_{\mathrm{HEM}}$, and for muons and $\tau_{\mathrm{h}}$ in the estimation of both $P_{\mathrm{off}}$ and $P_{\mathrm{trig}}$. Therefore, these quantities are estimated as the average over the inclusive category $n_{\mathrm{lay}} \geq 4$. The dependence of these values on $n_{\mathrm{lay}}$ for electrons is applied as a systematic uncertainty in these cases, as described below in Section 6.1.
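Putting the pieces together, a minimal sketch of the factorized estimate implied by the text (the displayed equation above is itself reconstructed from that description) is shown below; the numerical inputs are placeholders, not the measured values.

```python
# Illustrative sketch of the charged-lepton background estimate, assuming the
# factorized form N_est = (N_ctrl / eps_trigger) * P_veto * P_off * P_trig * P_HEM.
def lepton_background(n_ctrl: float, eps_trigger: float,
                      p_veto: float, p_off: float, p_trig: float,
                      p_hem: float = 1.0) -> float:
    return (n_ctrl / eps_trigger) * p_veto * p_off * p_trig * p_hem

# Placeholder inputs; the real analysis evaluates one such product per lepton
# flavor and per n_lay category:
print(lepton_background(n_ctrl=2.0e6, eps_trigger=0.84,
                        p_veto=3.0e-5, p_off=0.75, p_trig=0.45, p_hem=1.0))
```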
Spurious tracks
Because spurious tracks do not represent the trajectory of an actual charged particle, the combination of tracker layers with associated hits is largely random. The requirement of zero missing inner and middle hits greatly suppresses the probability of selecting a spurious track.
To measure the probability that an event contains a spurious track, two control samples containing $Z \to ee$ and $Z \to \mu\mu$ decays, respectively, are selected as representative samples of SM events. The signal benchmark chosen does not contain Z bosons, so any candidate disappearing tracks observed in these control samples can reliably be labeled as spurious tracks. Since spurious tracks generally do not point to the PV, the purity of the spurious-track samples can be enhanced by replacing the nominal requirement of $|d_0| < 0.02$ cm with a "sideband" selection, defined as $0.05 \leq |d_0| < 0.50$ cm.
To normalize the sideband selection to the search region, the shape of the $d_0$ distribution is described by a fit to a Gaussian function with an added constant, for each control sample in the $n_{\mathrm{lay}} = 4$ category. The fit is performed in the slightly restricted range $0.1 \leq |d_0| < 0.5$ cm to remove any overlap with the signal region. A transfer factor $\zeta$ is then calculated as the ratio of the integral of the fit function in the signal region to that in the sideband. The value of $\zeta$ derived from the $n_{\mathrm{lay}} = 4$ category is applied to the $n_{\mathrm{lay}} = 5$ and $n_{\mathrm{lay}} \geq 6$ categories because the event counts in these categories are not sufficient to observe a different $d_0$ distribution. Finally, the spurious track background is estimated as the raw probability for a control sample event to contain a sideband disappearing track candidate ($P^{\mathrm{raw}}_{\mathrm{spurious}}$), multiplied by $\zeta$ and normalized to the number of events passing the basic selection ($N^{\mathrm{basic}}_{\mathrm{ctrl}}$):

$$N^{\mathrm{est}}_{\mathrm{spurious}} = N^{\mathrm{basic}}_{\mathrm{ctrl}}\, \zeta\, P^{\mathrm{raw}}_{\mathrm{spurious}}.$$
This calculation is performed separately for each signal category of n lay for both Z→ee and Z→µµ control samples, using the Z→µµ estimate as the central value of the spurious track background estimate.
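For illustration, the transfer factor $\zeta$ could be obtained with a fit of this form as sketched below; the Gaussian is assumed to be centered at zero, and the input histogram is a placeholder rather than the analysis data.

```python
# Illustrative sketch of the zeta transfer factor: fit the |d0| sideband with a
# Gaussian (assumed centered at zero) plus a constant, then take the ratio of the
# fitted integral in the signal region to that in the sideband.
import numpy as np
from scipy.optimize import curve_fit
from scipy.integrate import quad

def gauss_plus_const(x, amplitude, sigma, constant):
    return amplitude * np.exp(-0.5 * (x / sigma) ** 2) + constant

def transfer_factor(d0_bin_centers: np.ndarray, d0_counts: np.ndarray) -> float:
    # Fit restricted to 0.1 <= |d0| < 0.5 cm, as described in the text.
    mask = (d0_bin_centers >= 0.1) & (d0_bin_centers < 0.5)
    popt, _ = curve_fit(gauss_plus_const, d0_bin_centers[mask], d0_counts[mask],
                        p0=(d0_counts[mask].max(), 0.1, d0_counts[mask].min()))
    signal_integral, _ = quad(gauss_plus_const, 0.00, 0.02, args=tuple(popt))
    sideband_integral, _ = quad(gauss_plus_const, 0.05, 0.50, args=tuple(popt))
    return signal_integral / sideband_integral

# N_est_spurious = N_ctrl_basic * transfer_factor(...) * P_raw_spurious
```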
Systematic uncertainties in the background estimates
The lepton background estimates assume that no visible energy is deposited in the calorimeters by leptons that are not explicitly identified. This is tested for electrons and $\tau_{\mathrm{h}}$ by allowing selected candidates to deposit 10 GeV in the calorimeters, the maximum value allowed by the requirement of $E_{\mathrm{calo}}^{\Delta R<0.5} < 10$ GeV for candidate signal tracks. The modified $p_{\mathrm{T}}^{\mathrm{miss},\not\mu}$ is constructed as before, but now the calculation includes 10 GeV in the direction of the lepton momentum. This is applied separately for each $n_{\mathrm{lay}}$ category for electrons, and in the inclusive $n_{\mathrm{lay}} \geq 4$ category for $\tau_{\mathrm{h}}$ because of small sample sizes. This results in a 13-15% decrease in the electron background estimate and an 11-25% decrease in the $\tau_{\mathrm{h}}$ background estimate. These changes are taken as systematic uncertainties.
The available data in the $n_{\mathrm{lay}} = 4$ and $n_{\mathrm{lay}} = 5$ categories do not separately provide enough events to measure $P_{\mathrm{off}}$ and $P_{\mathrm{trig}}$, nor $P_{\mathrm{HEM}}$, used in the muon and $\tau_{\mathrm{h}}$ background estimates. Therefore we measure these values in the inclusive category $n_{\mathrm{lay}} \geq 4$ instead. The effect of this averaging is estimated by comparing the values obtained for these quantities in exclusive and inclusive $n_{\mathrm{lay}}$ categories for the single-electron control sample, where there are adequate data to measure each. The differences in these values range between 1 and 11%. These values are applied as one-sided systematic uncertainties in the estimate of the background contribution from muon and $\tau_{\mathrm{h}}$ candidates for the $n_{\mathrm{lay}} = 4$ and $n_{\mathrm{lay}} = 5$ categories.
The spurious track background estimate relies on several assumptions. The first assumption is that the spurious track probability is independent of the underlying physics content of the event. This is tested by comparing the estimates obtained from the Z→ee and Z→µµ control samples. The differences in the estimates derived from these two control samples, included as systematic uncertainties, range from 0 to 200%. Even with the largest differences, the systematic uncertainties are insignificant compared to much larger statistical uncertainties.
The second assumption of the spurious track background estimate is that the projection of the d 0 sideband correctly describes the signal d 0 region. This assumption is tested by comparing the number of signal-like tracks (|d 0 | < 0.02 cm) in the Z→ee and Z→µµ control samples to the number projected from the sideband. Within the statistical and fit uncertainties, the projected number of tracks agrees well with the observed signal-like counts, so no systematic uncertainty is applied.
The third assumption of the spurious track background estimate is that it is independent of the definition of the d 0 sideband. The validity of this assumption is examined by defining nine alternative, disjoint sidebands of width 0.05 cm instead of the single sideband region of width 0.50 cm. The spurious track estimate is determined for each of these. The observed deviations of these estimates are well within statistical fluctuations of the nominal estimate. Therefore, no systematic uncertainty is introduced to cover these differences.
The uncertainty in ζ due to the fit procedure is evaluated by varying the fit parameters within ±1 standard deviation of their statistical uncertainties, and comparing the resulting values of ζ. A variation of ±(43-52)% from the nominal value is found, and this variation is taken as an estimate of the contribution from this source to the overall systematic uncertainty in the spurious track background.
The systematic uncertainties in the background estimates are summarized in Table 2.
Systematic uncertainties in signal selection efficiencies
Theoretical uncertainties in the chargino production cross section arise from the choice of factorization and renormalization scales and from uncertainties in the PDFs used. These effects result in an assigned uncertainty in the expected signal yields of 2-9%, depending on the chargino mass. Uncertainties in additional selection efficiencies are evaluated by comparing the efficiency of each between data and simulation in a control sample of single-muon events. The uncertainty in the efficiency of the $E_{\mathrm{calo}}^{\Delta R<0.5}$ requirement is taken to be the difference between the efficiencies obtained from data and from simulation in the $Z \to \mu\mu$ control sample (0.4-1.0%), where the tracks are expected to be predominantly spurious. The uncertainty in the track reconstruction efficiency is evaluated to be 2.1% in 2017 data [45] and 2.5% in 2018 data [46].
The efficiency of the reconstructed lepton veto in simulated events depends on the modeling of detector noise, which may produce calorimeter or muon detector hits that result in a lepton candidate and thereby reject the track. The differences in reconstructed lepton veto efficiencies between data and simulation are studied by estimating the efficiencies relative to tighter lepton criteria, for which detailed scale factors are available, in the sample of events used to measure P veto for the electron and muon backgrounds. Differences between estimates from data and simulation of up to 0.1% are observed, and these are taken into account as systematic uncertainties.
Statistical uncertainties in trigger efficiencies for data and simulation are estimated to be 0.4% for each n lay category, and are applied as systematic uncertainties. In the case of short tracks (n lay = 4 and n lay = 5), no source in data is available outside of the search region to measure the efficiency of the track leg of the trigger requirement, which requires at least five tracker hits associated with the track at HLT. To study this requirement's effect, the trigger efficiency is measured for signal events in each search category as a function of p miss, µ / T , and the differences between n lay ≥ 6 and n lay = 4 (5) efficiencies are used to define weights for the n lay = 4 (5) category. These weights are not applied to the nominal signal yield, but are used to evaluate a conservative systematic uncertainty. The weighted signal yields are compared to the nominal, unweighted values, resulting in an average systematic uncertainty of 1.0% (0.3%) for the n lay = 4 (5) category.
The systematic uncertainties in the signal efficiencies are summarized in Table 3.
Results
Table 3: Summary of the systematic uncertainties in the signal efficiencies. Each value listed is the average across all data-taking periods, all chargino masses and lifetimes considered, and wino and higgsino cases. The values given as a dash are negligible. (The table body, with columns for each $n_{\mathrm{lay}}$ category, is not reproduced here.)

The expected number of background events and the observed number of events are shown in Table 4 for each event category and each data-taking period. The observations are consistent with the expected total background. Upper limits are set at 95% confidence level (CL) on the product of the cross section and branching fraction for each signal model. These limits are calculated with an asymptotic CL$_\mathrm{s}$ criterion [47][48][49] that uses a test statistic based on a profile likelihood ratio and treats nuisance parameters in a frequentist context. Nuisance parameters for the theoretical uncertainties in the signal cross sections, integrated luminosity, and signal selection efficiencies are constrained with log-normal distributions. The uncertainties in the background estimates are estimated separately for spurious tracks and for reconstruction failures of each flavor of charged lepton, and are treated as independent nuisance parameters. Uncertainties resulting from limited control sample sizes are constrained with gamma distributions, whereas those associated with multiplicative factors or discussed in Section 6.1 are constrained with log-normal distributions. The three $n_{\mathrm{lay}}$ categories are treated as independent counting experiments, as are the data-taking periods 2017, 2018 A, and 2018 B. In the case of electroweak production with a wino LSP, the results of this search are combined with the previous search presented by CMS, based on data collected in 2015 and 2016 [17]. All data-taking periods are treated as completely uncorrelated and are considered as independent counting experiments. Systematic uncertainties are measured independently for each period and treated as uncorrelated nuisance parameters, with the exception of uncertainties in the signal cross section, which are treated as 100% correlated.
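To make the structure of such a likelihood concrete, a minimal single-bin counting model with one log-normal-constrained background nuisance parameter is sketched below. This is only an illustration, not the CMS statistical machinery, which uses the full asymptotic CL$_\mathrm{s}$ construction across all categories and data-taking periods; the signal yield and the size of the background uncertainty are placeholders loosely motivated by the totals quoted in this paper.

```python
# Illustrative single-bin profile likelihood with a log-normal background nuisance.
import numpy as np
from scipy.stats import poisson, norm

def neg2_log_likelihood(mu: float, theta: float,
                        n_obs: int, s: float, b: float, kappa: float) -> float:
    """-2 ln L for n_obs ~ Poisson(mu*s + b*kappa**theta), with theta ~ Normal(0, 1)."""
    expected = mu * s + b * kappa ** theta
    return -2.0 * (poisson.logpmf(n_obs, expected) + norm.logpdf(theta))

def profiled(mu: float, n_obs: int = 48, s: float = 10.0,
             b: float = 47.8, kappa: float = 1.17) -> float:
    """Profile the nuisance parameter on a coarse grid (illustration only)."""
    thetas = np.linspace(-5.0, 5.0, 2001)
    return min(neg2_log_likelihood(mu, t, n_obs, s, b, kappa) for t in thetas)
```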
The expected and observed upper limits on the product of cross sections of electroweak production and branching fractions in the wino LSP case are shown in Fig. 1 for four chargino lifetimes. Two-dimensional constraints derived from the intersection of the theoretical predictions with the expected and observed upper limits, for each chargino mass and mean proper lifetime considered, are shown in Fig. 2 for a purely wino LSP and in Fig. 3 for a purely higgsino LSP.

Figure 3: The expected and observed constraints on chargino lifetime and mass for a purely higgsino LSP in the context of AMSB, where the chargino lifetime is explicitly varied. Following Ref. [29], the branching fractions are taken to be 95.5% for $\chi^{\pm}_{1} \to \chi^{0}_{1,2}\,\pi^{\pm}$, 3% for $\chi^{\pm}_{1} \to \chi^{0}_{1,2}\,e\nu$, and 1.5% for $\chi^{\pm}_{1} \to \chi^{0}_{1,2}\,\mu\nu$ in the range of chargino masses of interest, with equal branching fractions and production cross sections for $\chi^{0}_{1}$ and $\chi^{0}_{2}$. The region to the left of the curve is excluded at 95% CL. The prediction for the chargino lifetime from Ref. [50] is indicated as the dashed line.
This search is the first to constrain chargino masses with a higgsino LSP obtained with the disappearing track signature.
Summary
A search has been presented for long-lived charged particles that decay within the CMS detector and produce a "disappearing track" signature. In the sample of proton-proton collisions recorded by CMS in 2017 and 2018, corresponding to an integrated luminosity of 101 fb$^{-1}$, 48 events are observed, which is consistent with the expected background of $47.8^{+2.7}_{-2.3}\,\text{(stat)} \pm 8.1\,\text{(syst)}$ events. These results are applicable to any beyond-the-standard-model scenario capable of producing this signature and, in combination with the previous CMS search [17], are the first such results on the complete Run 2 data set, corresponding to a total integrated luminosity of 140 fb$^{-1}$.
Two interpretations of these results are provided in the context of anomaly-mediated supersymmetry breaking. In the case of a purely higgsino neutralino, charginos are excluded up to a mass of 750 (175) GeV for a mean proper lifetime of 3 (0.05) ns, using the 2017 and 2018 data sets. In the case of a purely wino neutralino, charginos are excluded up to a mass of 884 (474) GeV for a mean proper lifetime of 3 (0.2) ns. These results make use of the upgraded CMS pixel detector to greatly improve sensitivity to shorter particle lifetimes. For chargino lifetimes above approximately 0.1 ns, this search places the most stringent constraints on direct chargino production with a purely wino neutralino obtained with the disappearing track signature. For a purely higgsino neutralino, these constraints are the first obtained with this signature.
Acknowledgments
We congratulate our colleagues in the CERN accelerator departments for the excellent performance of the LHC and thank the technical and administrative staffs at CERN and at other CMS institutes for their contributions to the success of the CMS effort. In addition, we gratefully acknowledge the computing centers and personnel of the Worldwide LHC Computing Grid for delivering so effectively the computing infrastructure essential to our analyses.
The Role of Engagement in Teleneurorehabilitation: A Systematic Review
The growing understanding of the importance of involving patients with neurological diseases in their healthcare routine, either for at-home management of chronic conditions or after the hospitalization period, has opened research into new rehabilitation strategies to enhance patient engagement in neurorehabilitation. In addition, the use of new digital technologies in the neurorehabilitation field enables the implementation of telerehabilitation systems, such as virtual reality interventions, video games, web-based interventions, mobile applications, and web-based or telephonic telecoach programs, in order to facilitate the relationship between clinicians and patients and to motivate and activate patients to continue with the rehabilitation process at home. Here we present a systematic review that aims to evaluate the effectiveness of different engagement strategies and the different engagement assessments used with telerehabilitation systems in patients with neurological disorders. We used the PICO format to define the review question, and the systematic review protocol was designed following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. Bibliographical data were collected from the following bibliographic databases: PubMed, EMBASE, Scopus, and Web of Science. Eighteen studies were included in this systematic review for full-text analysis. Overall, the reviewed studies using engagement strategies through telerehabilitation systems in patients with neurological disorders were mainly focused on the patient self-management and self-awareness, patient motivation, and patient adherence subcomponents of engagement, which fall within the behavioral, cognitive, and emotional dimensions of engagement. Conclusion: The studies discussed throughout this systematic review pave the way for the design of new telerehabilitation protocols that do not focus only on quantitative or qualitative measures but combine both through a mixed-model intervention design (1). Future clinical studies with a mixed-model design will provide richer data regarding the role of engagement in telerehabilitation, leading to a greater understanding of its underlying components.
INTRODUCTION
In the field of neurorehabilitation, one of the main objectives after a brain or nerve injury is to develop rehabilitation strategies directed at the recovery of functional skills by enhancing neuroplasticity (2). Even though the type of intervention, its intensity, and the number of sessions are known to be important in task-specific rehabilitation training (3), engagement is known to play a key role in enhancing neuroplasticity and facilitating functional recovery in patients with neurological disorders (2,4). In this regard, some studies observed that by increasing patients' attention and interest toward the rehabilitation training, there is updating and modification at a neurological level, which leads to improved functional outcomes (5). However, to achieve such positive functional outcomes in neurorehabilitation, the nervous system has to be engaged and challenged (5,6). From a neurobiological point of view, several studies have shown how engagement may increase neural activity in different cortical areas, such as (2) the orbitofrontal regions, which integrate information from sensory and motivational pathways to generate pleasure, (3) the ventral striatal dopaminergic systems, and (4) the anterior cingulate cortex, which holds attention during demanding task execution (7). Even though there are not enough studies using neuroimaging techniques to demonstrate the effects of engagement on neuroplasticity for rehabilitation, a large number of studies using mental practice techniques, enriched environments, and attentional and motivational strategies, in which patients become active actors of the rehabilitation training, corroborate the relationship between engagement and neuroplasticity (8)(9)(10). In this regard, the growing development of technology in the last decade has led to the introduction of new digital systems in rehabilitation through which it is possible to provide different sensory stimuli that enhance patients' resources such as attention and motivation. Thus, digital technologies in rehabilitation are aimed at providing information and/or supporting emotional, behavioral, or physiological features of the pathology within an enriched and stimulating environment (11)(12)(13)(14). One interesting feature of digital technologies in rehabilitation is the opportunity to apply technology-based interventions to provide a rehabilitation service through digital and telecommunication technologies during the hospitalization period, or at home after discharge from the hospital (15). Such application of digital technologies for rehabilitation is commonly known as telerehabilitation (16). Moreover, through telerehabilitation systems it is possible to engage patients by providing them with online (or offline) feedback on their outcomes through a double communication loop (17,18). This type of communication combines remote monitoring of patients' performance with clinicians' appropriate responses, adapting and personalizing the planned rehabilitation activities and empowering patients toward the targeted rehabilitation aim (18,19). Further, through these types of telerehabilitation systems, clinicians can meet the needs of the patients in long-lasting rehabilitation programs after the hospitalization period, allowing them to remain involved in social and productive life despite their clinical condition (17).
Moreover, through telerehabilitation systems clinicians have the possibility of delivering long rehabilitation trainings in an enriched digital environment at patients' homes while substantially reducing healthcare costs (20). Thus, the use of telerehabilitation systems can enhance patients' engagement by allowing them to conduct their rehabilitation training at home. However, it is not yet clear what engagement entails, or how it can be enhanced, when telerehabilitation systems are used with patients with neurological disorders. For this reason, the following section aims to clarify some components and subcomponents of engagement at a clinical level.
Patient-Centered Medicine and Engagement
When we refer to patient engagement in the clinical field, we have to refer to patient-centered medicine (PCM). These two concepts are associated given that PCM considers a patient's active participation in the clinical process as pivotal, instead of considering only the clinical professionals' point of view (21). In that context, patient engagement was considered a concept to qualify the exchange between patients' demands and clinicians' supplies (22). Further, in healthcare, the term "engagement" came to indicate a renewed partnership between patients and healthcare providers (23). The main goal of engaging patients in their clinical process can thus be identified as making them aware of the management of their health status and illness, and providing more positive outcomes in healthcare (24). Indeed, during the clinical process, patient engagement is a key factor in making patients feel like participants in the therapeutic process, which leads to better adherence to the therapy, patient sensitization, and patient knowledge and empowerment (25). Even though the term "engagement" seems clear enough by itself, it involves different factors that have to be taken into consideration when engaging patients in a therapeutic process. Specifically, the factors involved in engagement are the following: participation and decision making, compliance and adherence, self-management, patient empowerment, and patient activation.
Participation and Decision Making
One of the main objectives for the improvement of the quality of health services defined by Entwistle and Watt (26) is the ability to involve patients in their therapeutic process through collaboration with the healthcare professionals. Two main factors have been defined for involving patients in clinical practice: patient participation and patient decision making. The first, patient participation, is considered a psychological component that focuses on identifying emotional and cognitive factors to enhance the active participation of patients in clinical decision making (27). The second is centered on the clinical and relational skills of the healthcare professionals in involving patients in clinical decisions (28,29). Altogether, when referring to engagement in a clinical context, the intention is to increase the communication between clinicians and patients to motivate patient participation throughout the clinical process. That means giving patients enough information about their illness to become more independent in their healthcare routine. An engaged patient is therefore a patient who can participate in clinical decision making and the healthcare routine, but also a patient able to actively participate in the global healthcare system by promoting new forms of assistance, for example by using new technology systems (30).
Compliance and Adherence
Other factors embedded in patient engagement are "compliance" and "adherence", which refer to the adaptive behaviors of patients in following medical prescriptions or the healthcare routine (31). Although these two factors are often presented together, there are some differences between them. While "compliance" relates to patients' ability to adapt their life routine to the clinicians' indications with a more passive/dependent attitude (32), "adherence" relates to patients' participation as active actors in the communication exchange with the clinicians, in which patients and clinicians plan the patient's care routine together (33). Hence, the level of compliance and adherence to the clinical process depends on patients' attitudes and behaviors in accepting or disagreeing with the clinicians' prescriptions, moving the concept of patient engagement toward a balance between patients' demands and clinicians' supplies (30).
Self-Management, Patient Empowerment, and Activation
Self-management refers to the patients' ability to manage the symptoms, treatments, and psychological and psychosocial consequences of their pathological condition, as well as the ability to manage the cognitive, behavioral, and emotional responses derived from their clinical condition, in order to reach a satisfactory quality of life (34,35). Indeed, self-management is considered a positive outcome of patient engagement during the clinical process. Moreover, patient empowerment is also considered an important positive outcome of the patient engagement process. The term "empowerment" refers to psychological resources through which patients can control their clinical condition and the related treatments (36,37). Thus, by providing patients with an educational healthcare process, they can recover agency and beliefs of self-efficacy over their health condition while increasing their autonomy at the same time (38). Even though the concepts of "empowerment" and "engagement" are strongly related, "empowerment" is considered the outcome of a mainly cognitive boosting process, related to patients' knowledge of their clinical condition, while "engagement" also sustains the emotional aspects regarding the acceptance of the patient's clinical condition and the behavioral skills to manage it (30). Finally, patient activation is related to the patients' capacity to manage their clinical condition and their ability to interact with the healthcare system based on their level of knowledge (39,40). It is suggested that an increase in patient activation leads to an increase in healthy behaviors and adherence to the clinical process (23). Patient activation has been defined by Hibbard et al. (23) as composed of four phases: (1) the passive activation level, where patients are not aware of their role in their health management; (2) where patients start to build their resources and knowledge about their health condition; (3) where patients can elaborate ad hoc responses to the problems related to their clinical condition; and (4) where patients can maintain their new lifestyle behaviors over long periods, even under stressful situations. Following these phases, Hibbard et al. created the Patient Activation Measure (PAM) to assess patient activation (23).
Hence, patient engagement considers not only the clinical environment but also non-clinical contexts such as patients' daily routines, activity routines, and the acceptance of their clinical condition outside the hospital, by exploring the dialogue between the supplies and demands of the healthcare services (41). In this regard, the use of new digital technologies to foster patients' engagement during and after the hospitalization period has been proposed (42).
Technology for Patient's Engagement in Neurorehabilitation
Today the development of new technologies has paved the way for their use for clinical purposes, especially to enhance patients' engagement in their healthcare routine (43). Recently, it has been demonstrated that the use of new digital technologies can modulate the dimensions described by Seligman (44) for positive psychology. Digital technologies have been considered essential for promoting dimensions relevant to illness prevention such as courage, future-mindedness, optimism, interpersonal skill, faith, work ethic, hope, perseverance, flow, and joy (42). In this regard, it is known that the use of virtual environments and serious games can induce positive emotional states, creating new virtual environments for human psychological growth and well-being (45). Following the model proposed by Frome (46), four factors have to be present to induce positive emotions with such virtual or serious games: a narrative factor, using role-playing through which it is possible to feel the emotions of the virtual character; a game-playing factor, providing the feeling of frustration or satisfaction when losing or winning the game; a simulation factor, meaning that the game has to provide engaging activities; and an aesthetics factor, referring to the artistic features of the game. These factors can promote user engagement through different technological sources such as mobile e-health (47), e-learning platforms (48), biofeedback systems (49), virtual reality systems (50,51), and video games (45), at patients' own homes.
In addition, new rehabilitation protocols including the use of new technologies have been developed in the neurorehabilitation field (52,53). In particular, the use of new technologies in neurorehabilitation, such as telerehabilitation systems, allows patients to continue with their healthcare process at home (19,54). In the field of neurorehabilitation, the rehabilitation and healthcare routine after the hospitalization period is complex, requiring multidisciplinary coordination (55,56). Telerehabilitation systems in neurorehabilitation allow a large number of people with neurological disorders, who often face limitations due to limited mobility and to the costs associated with travel, to continue with their healthcare process at their own home, minimizing the barriers of distance, time, and cost, and receiving continued support from the clinicians remotely (57,58). The feasibility and efficacy of telerehabilitation systems in neurorehabilitation have been documented in patients with different neurological conditions, such as patients in a post-stroke phase (59)(60)(61), Parkinson's disease (18,62,63), and multiple sclerosis (18,64). Nevertheless, the role of engagement, and the different factors used to engage patients with neurological disorders in telerehabilitation training during the rehabilitation period, have not yet been deeply investigated. Hence, this systematic review aims at reviewing the effectiveness of different engagement strategies and the different engagement assessments used with telerehabilitation systems in patients with neurological disorders.
METHODS
A systematic review of the scientific literature has been conducted in order to identify different engagement strategies, as well as studies reporting engagement assessment methods, when using telerehabilitation systems in patients with neurological disorders. The systematic review protocol was designed following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines (65).
Data Sources and Search Strategy
According to the PICO format for formulating the foreground question of this systematic review (66), the review question has been defined as: "in adults with neurological disorders, is the role of engagement for telerehabilitation interventions, compared to treatment as usual, effective in improving the neurorehabilitation intervention?" Bibliographical data were collected on July 4, 2019, using the following bibliographic databases: PubMed, EMBASE, Scopus, and Web of Science. For each database, we used the following combinations of research keywords: (1) ("engagement" OR "motivation" OR "activation" AND "telerehabilitation"); (2) ("engagement" OR "motivation" OR "activation" AND "telehealth"); (3) ("engagement" OR "motivation" OR "activation" AND "telemedicine"); (4) ("engagement" OR "motivation" OR "activation" AND "telecare"). See the detailed search strategy in Table 1. Only articles with full text available were included in our research (conference papers were excluded); study citations were retrieved independently for each string of keywords across all databases. Finally, the first list of studies collected during the bibliographic research was exported to Mendeley to remove duplicated studies. The list of studies without duplicates was then imported to Rayyan (67) for title and abstract screening, following the specified inclusion or exclusion criteria for study selection (see section Study Selection and Data Collection), by one reviewer (M.M.G). The final list of the selected studies was sent to leading experts in the field for suggestion and identification of any missing studies, and no studies were added.
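As an illustration only (this is not part of the review protocol), the four keyword combinations listed above can be assembled into boolean query strings programmatically; the exact query syntax accepted by each database differs.

```python
# Illustrative construction of the four search strings used across the databases.
ENGAGEMENT_TERMS = ['"engagement"', '"motivation"', '"activation"']
TELE_TERMS = ['"telerehabilitation"', '"telehealth"', '"telemedicine"', '"telecare"']

def build_queries():
    or_block = "(" + " OR ".join(ENGAGEMENT_TERMS) + ")"
    return [f"{or_block} AND {tele_term}" for tele_term in TELE_TERMS]

for query in build_queries():
    print(query)
# e.g. ("engagement" OR "motivation" OR "activation") AND "telerehabilitation"
```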
Study Eligibility Criteria
The present review aims at evaluating the effectiveness of different engagement strategies and the different engagement assessments used with telerehabilitation systems in patients with neurological disorders. The selected studies therefore had to investigate engagement while using telerehabilitation systems in adult patients with neurological disorders. The bibliographical research was limited to studies in humans written in English. Further, the selected studies had to fulfill the following inclusion criteria:
(1) Telerehabilitation interventions must have been directed at engaging patients in their healthcare routine. Interventions directed at engaging other stakeholders, such as medical staff, hospital managers, and others, were excluded.
(2) Telerehabilitation interventions must have been directed to a group of patients, with a between- or within-group study design. Single-case studies were excluded.
(3) Telerehabilitation interventions must have been directed at assessing one or more components of patient engagement.
Study Selection and Data Collection
One reviewer (M.M.G.) conducted the final selection of the studies for full-text analysis. The following keywords were considered as inclusion criteria for selected articles in Rayyan (67): neurorehabilitation, neurological patients, patients, participation, adherence, self-management, empowerment, activation, telerehabilitation, telehealth, telemedicine, telecare, e-health. Further, the following keywords were considered as exclusion criteria: no engagement, no neurological patients, animal studies, and review studies. The selected articles that fulfilled the inclusion criteria were then analyzed independently in full text by three reviewers (M.M.G., M.M., and J.M.). The final selected studies were discussed among the three reviewers in order to address minor discrepancies about the study selection criteria, which were resolved by consensus.
Risk of Bias Assessment
For the risk of bias assessment, the reviewers followed the guideline of the Cochrane Collaboration risk of bias tool, according to the latest version of the risk of bias tool (RoB2) statement (68). All three reviewers (M.M.G, M.M, and J.M) independently evaluated the studies for risk of bias, and disagreements were resolved through consensus (Table 2).
Study Selection
A total of 7,953 studies were found using the keywords described in section Data Sources and Search Strategy and the inclusion criteria words specified in section Study Selection and Data Collection. After removing duplicate studies, a total of 4,618 studies were included in the title and abstract screening in the Rayyan software. Of these 4,618 non-duplicate studies, 4,464 did not fulfill the described study eligibility criteria. Subsequently, 82 studies were selected for full-text analysis. Of the 82 full-text analyzed studies, only 18 were identified as meeting the above-described inclusion criteria. See Figure 1 for a flow diagram depicting the study selection process. Of the 82 studies, only 18 included engagement strategies and engagement assessment, either as a primary or secondary outcome, after the telerehabilitation training in patients with neurological disorders.
Moreover, following the TIDieR checklist for reporting research interventions (87), the following points have been reported in Table 4: (1) why (aim of the study), (2) what (materials), (3) who provided, (5) tailoring, and (6) intervention adherence. (2) Out of the eighteen analyzed studies, thirteen aimed at investigating the effectiveness, usability, feasibility, reliability, and acceptability of the telerehabilitation system (70-75, 77-79, 82-84, 86); one study aimed at investigating the sense of co-presence between the therapist and patients through the telerehabilitation system (69); three studies aimed at investigating changes in self-management, self-determination, and self-motivation after the telerehabilitation period (76,81,86); and finally one study aimed at assessing possible changes in aphasia severity after the telerehabilitation period (80). (3) Five studies used a computer-based telerehabilitation system (69,(73)(74)(75); three studies used a tablet set-up as a telerehabilitation platform (70,71,78); three studies used patients' smartphone applications for psychological or motor telerehabilitation programs (72,81,86); three studies used phones as the set-up for a telephone-based telerehabilitation intervention (76,79,82); and finally, three studies used an online web platform as an internet-based telerehabilitation intervention (77, 80, 85). (4) Out of the 18 selected studies, nine involved therapists (physiotherapists, psychologists, or coach therapists) or medical doctors in the administration of the telerehabilitation program (69, 74-76, 79, 80, 82, 84, 85); four studies involved trained researchers in the administration of the telerehabilitation program (72,73,78,83); two studies described a self-administered telerehabilitation program (70,71); and three studies did not specify who was involved in the telerehabilitation program (77, 81, 86). (5) Out of the 18 analyzed studies, only three adjusted the difficulty levels of the telerehabilitation program automatically according to the progress of the patients over the rehabilitation period (69,73,74). (6) Out of the 18 analyzed studies, only one study did not assess adherence to the intervention (70). Among the other 17 studies, 11 used semi-structured or unstructured interviews to assess patients' adherence to the telerehabilitation program (71,72,75,76,(78)(79)(80)(81)(82)(83)(84); four studies used questionnaires (74,75,77,86); two studies used the assessment report collected from the mobile or tablet rehabilitation application (78,86); and one study used the online counseling feedback to assess patients' adherence to the telerehabilitation program (85). In addition to the points discussed above, Table 4 shows more detailed information about the research intervention of each study.
Risk of Bias
All studies except five presented a high risk of bias in some of the factors assessed in this systematic review (74,76,79,82,86). Table 2 shows the results of the risk of bias assessment of this systematic review. All the studies included in this systematic review reported the sampling method. However, only five out of 18 studies presented a randomized controlled trial study design, including a control group for treatment comparisons (74,76,79,82,86). Ten studies presented a small sample size for representing the results obtained after the treatment period (69-73, 75, 80, 83-85). Five studies based their results on the analysis of interviews conducted with the patients, without analyzing any other clinical measure for engagement assessment (71,75,(83)(84)(85). All the studies included in this review reported their sample allocation method and study design. However, 12 studies did not use random allocation methods for the sample allocation and did not include a control group in the study design (70).
Engagement Interventions in Teleneurorehabilitation
Once the final 18 studies included in this systematic review had been analyzed, the studies were divided into those in which engagement was considered a primary outcome of the telerehabilitation training (n = 11) (70-72, 76-79, 81, 82, 84, 85), and those in which engagement was considered a secondary outcome of the telerehabilitation training (n = 7) (69, 73-75, 80, 83, 86).
Engagement as a Primary Outcome
Most of the 11 analyzed studies that investigated patient engagement as a primary outcome of a telerehabilitation training in patients with neurological disorders involved patients' self-management, self-awareness, and self-determination strategies to enhance patients' active participation in their healthcare routine and to provide patient empowerment. Such engagement strategies fall within the behavioral and cognitive dimension of engagement (88). Specifically, in the present systematic review, four studies directed at enhancing the behavioral and cognitive dimension of engagement while using telerehabilitation systems were found. For instance, a non-immersive virtual reality multitouch system was used at home by 10 patients with acquired brain injury (ABI) to treat self-awareness deficits (70). In particular, patients were engaged in a self-awareness game consisting of answering questions related to knowledge (anatomical and pathological matters), reasoning (situational exercises), action (role-playing), or cohesion (jokes and sayings), in a competitive context (70). Further, in another study, the authors used a smartphone application for both the telemonitoring and telecoaching of 57 patients with multiple sclerosis (MS) (81). The study by D'hooghe et al. aimed at fostering patients' self-energy management and physical activity, decreasing the level of fatigue after physical activity. Regarding patients with MS, a web-based model (FACETS: Fatigue: Applying Cognitive-behavioral and Energy effectiveness Techniques to life Style) of service delivery from healthcare providers was also tested in 15 patients with MS to improve the behavioral and cognitive dimension of engagement (84). Further, an online video-chat platform was used as a pilot telehealth intervention, grounded in self-determination theory, to enhance satisfaction, motivation, physical activity, and quality of life in adults with spinal cord injury (SCI) (n = 11) (85). Finally, an Android application on a tablet, together with a physiologic monitor, was used as a telehealth system in 20 patients with PD to explore two different internet engagement trainings: a tele-coach-assisted training (n = 10) and a self-regulated exercise training (n = 10) (78). Other frequent strategies used for engagement in telerehabilitation are those directed at enhancing patients' adherence and compliance to the therapy. In this regard, one study in this systematic review used a mobile web portal (wbPRO) to evaluate patient-reported outcomes in terms of feasibility, reliability, adherence, and subject-perceived benefits in 31 patients with MS, in order to quantify the impact of MS-related symptoms on patients' perceived well-being (77). Moreover, a more sophisticated telerehabilitation system (the SENSE-PARK system), including a set of wearable sensors (three to be used during the day and one at night), Wii Balance Board software, and a smartphone application, was used at patients' homes to assess the feasibility and usability of the system in 22 patients with PD (72). Further, a web-based physiotherapy platform with weekly personal, conversational support was used in patients with MS (n = 45), compared to a usual home paper-format protocol (n = 45), to explore the user experience and feasibility of a web-based intervention (82).
Finally, two studies in this systematic review were directed at investigating the emotional components of the engagement strategies when using telerehabilitation systems. These types of engagement strategies are embedded in the emotional dimension of engagement (88), usually implemented by using telephone and email interviews. Specifically, two studies were directed at enhancing the emotional dimension of engagement (76,79). In the study conducted by Houlihan et al., the therapists assessed the results obtained from a telephone-based health self-management intervention in patients with SCI (n = 42), compared with a usual care control group (n = 42). In the study conducted by Skolasky et al., the clinical staff involved in the study used motivational interviewing strategies to elicit and strengthen motivation for change in patients with MS (n = 31).
Engagement as a Secondary Outcome
Seven studies in this systematic review used telerehabilitation training for motor, cognitive, or logopedic interventions in patients with neurological disorders and considered patient engagement as a secondary outcome. Specifically, three of these studies were directed at investigating user experience and system feasibility when using telerehabilitation systems for other neurorehabilitation purposes (73,83,86). As an example, the study conducted by Ellis et al. explored the preliminary effectiveness, safety, and acceptance of a mobile health (mHealth) application (a mediated exercise program) designed to promote sustained physical activity in 23 patients with PD. Moreover, in another study, the authors assessed the feasibility and potential clinical changes associated with telerehabilitation training for upper limb recovery, based on a robotic technology-supported arm combined with a video-game training system, in 24 patients with chronic stroke (73). Finally, De Vries et al. reported the opinion of 16 patients with PD when using a home-based system without video movement analysis (83).
The other four studies investigated engagement as a secondary outcome when using telerehabilitation systems for neurorehabilitation purposes. Specifically, one study investigated changes in aphasia severity, communication-related quality of life, and participation in 19 patients with aphasia while using the TeleGAIN telerehabilitation system (80). Another study investigated postural control and balance improvements after a 10-week virtual Kinect home-exercise program in 24 adults with MS, and assessed patients' adherence and motivation when using the telerehabilitation system as a secondary outcome (75). In the study conducted by Yeh et al., the authors tested a telerehabilitation system composed of two subsystems, a motor rehabilitation system and a telecommunication system, to improve the mobility of patients with stroke and to motivate them to continue with the telerehabilitation training (69).
Finally, in another study, the effectiveness of a virtual reality-based telerehabilitation program for balance recovery in chronic stroke patients was assessed and compared to the usual rehabilitation training (74).
Engagement Assessment
Among the studies analyzed in this systematic review, the following three main assessment methods were found to assess patient engagement: measurement scales, telephone-based interviews, and paper diaries. Regarding the measurement scales, in the study conducted by Lloréns et al. (70) the authors used the Self-Awareness Deficits Interview (SADI) scale (89) and the Social Skills Scale (SSS) (90). Others used the Short Form-36 (SF-36) (91) and the Hospital Anxiety and Depression Scale (HADS) (92) to assess engagement as a secondary outcome (81). Moreover, the Communication Life Scale and the communicative activities checklist were used in patients with aphasia to assess engagement as a secondary outcome (80). Finally, three scales directed at assessing engagement as a primary outcome were used. The Intrinsic Motivation Inventory (IMI) (93) was used to assess the level of motivation in patients with stroke after the telerehabilitation period (73). The Patient Activation Measure (PAM) (23) was used to assess health self-management in patients with SCI (76). Finally, the Profile of Mood States (POMS) questionnaire (94) was used in patients with SCI or ABI after the telerehabilitation training period (69). Table 5 summarizes the different scales and the aim of each engagement measure.
Engagement as a Primary Outcome
The following outcomes were reported in the analyzed studies that aimed to foster patient engagement as a primary outcome. The VR game proposed in the study conducted by Llorens et al. improved self-awareness and social cognition in patients with ABI and PD after 8 months of telerehabilitation training (70). Through a smartphone TeleCoach application, patients with MS increased activity and reduced fatigue levels after 12 weeks of training, improving patients' self-management (81). Moreover, another study demonstrated that by replicating rehabilitation group dynamics through a telerehabilitation system it is possible to enhance patient engagement in the rehabilitation training in patients with MS (84). Regarding the use of telerehabilitation training in patients with stroke, one study showed that with an iPad-based training, stroke survivors experienced increased participation in therapeutic activities and increased socialization, as well as less inactivity and boredom (71). In addition, the results obtained in the study conducted by Nijenhuis et al. showed increased motivation to participate in the rehabilitation training when using a remotely monitored training system at home (73). However, in another study conducted in patients with PD, the patients reported that direct feedback about their health condition when using the telerehabilitation training system would help to increase their motivation (72). Another study showed that patients with PD benefit from a mobile biofeedback system that provides real feedback about patients' health condition and enhances patient engagement in the rehabilitation routine (86). Furthermore, in one study in which patients with stroke could feel the co-presence of the therapist during the telerehabilitation training, the psychological state of the patients improved (69). However, in contrast to the above-mentioned studies, one study reported a reduction in patients' self-efficacy and willingness regardless of patients' fatigue after the telerehabilitation training (69).
Finally, one study highlighted the importance of building in conversations through weekly interviews with people with MS about expectations of exercise and its potential benefits, particularly with those patients whose physical and mental conditions may be deteriorating while using motor telerehabilitation systems (82). In this regard, another study reported that health behavior change counseling delivered by telephone-based interventions could improve health outcomes during the first 12 months after the surgical procedure in patients operated on for spinal stenosis, improving patient engagement with the rehabilitation program (79). Moreover, a 6-month telerehabilitation period based on a telephone intervention program showed a more significant change in PAM scores, as well as a greater decrease in social/role activity limitations, and improvements in services/resources awareness in patients with SCI (76). Further, another telerehabilitation training using an online video-chat platform increased autonomous motivation in patients with SCI (85).
Engagement as a Secondary Outcome
Regarding the outcomes observed in the analyzed studies that aimed to foster patient engagement as a secondary outcome, we observed the following. One study reported improvements in communication-related quality of life in patients with aphasia and a decrease in aphasia severity, which led to an increase in patient engagement in communicative activities (80). Another study conducted by Palacios-Ceña et al. highlighted the following positive factors reported by patients with MS after using a Kinect telerehabilitation system: (1) the Kinect training increased the level of independence of the patients; (2) the patients reported being able to share their illness state with their relatives; (3) the patients reported positive effects regarding the incorporation of a videogame for rehabilitation; and (4) the patients reported positive effects regarding the possibility of evaluating themselves through the feedback provided by the telerehabilitation system (75).
DISCUSSION
The engagement of patients in the rehabilitation process is considered a primary aim for worldwide healthcare interventions [see (95)]. Patient engagement is considered a key component in neurorehabilitation in order to promote greater neuroplastic changes and functional outcomes (2). In this regard, digital technologies have been considered a useful resource for enhancing patients' participation, allowing them to have an active role in their healthcare process (96,97). The introduction of digital technologies in the field of neurorehabilitation has prompted the possibility of conducting the rehabilitation protocol at patients' homes (16,98). Thus, telerehabilitation protocols save time for the patient by reducing trips to the hospital, and clinicians can follow the patients after hospital discharge (16,98). However, what is the role of engagement when using telerehabilitation systems in neurorehabilitation? The present systematic review aims to review the different engagement strategies and the different engagement assessments used while applying telerehabilitation systems in neurorehabilitation. In this systematic review, the studies were first divided into those in which patients' engagement was considered a primary outcome of the telerehabilitation training and those in which engagement was considered a secondary outcome. Interestingly, more studies considered patients' engagement as a primary outcome of the telerehabilitation training (N = 11) than as a secondary outcome (N = 7). In particular, most of the analyzed studies directed at enhancing patients' engagement through telerehabilitation systems in neurorehabilitation were conducted during the last 4 years, from 2015 to 2019 (70-72, 76-79, 81, 82, 84, 85). These data indicate that fostering patients' engagement through the use of new technologies in neurorehabilitation has been a matter of interest for several years. Interestingly, this is in line with the systematic review conducted by Barello et al. (99), in which they looked for studies using e-Health interventions for patient engagement and highlighted the necessity of conducting more studies investigating the use of new digital technologies to enhance patient engagement. The data collected in this systematic review confirm that there has been a progressive increase in the use of new technologies to engage patients, specifically those with neurological disorders, in their rehabilitation process. Secondly, our results showed an increase in interest in creating new telerehabilitation protocols in neurorehabilitation for enhancing patients' engagement by promoting patients' self-awareness and self-management (N = 6), patients' motivation (N = 9), and emotional support (N = 9). Such engagement components have been described as components of the behavioral and cognitive dimensions of patients' engagement (30). Thus, the studies analyzed in this systematic review were directed at fostering the behavioral and cognitive dimensions through the use of telerehabilitation systems in patients with neurological diseases. These findings are supported by other investigations that were also directed at fostering the behavioral and cognitive dimensions of engagement during the rehabilitation process of different clinical populations (100,101).
In this regard, the results of this systematic review show that the use of telerehabilitation systems in patients with neurological disorders is useful for fostering the behavioral and cognitive dimensions of engagement and for increasing patients' engagement with the rehabilitation program (73,77,78,81,84,86). One explanation could be that, through telerehabilitation systems, it is possible to give the patients real feedback about their physical and physiological conditions, as well as the possibility to interact with the telerehabilitation system (70, 73-75, 78, 81, 83). In this regard, the studies of this systematic review are consistent with later investigations that demonstrated the effectiveness of digital technologies in inducing behavioral, physiological, and emotional responses by giving the patients immediate, real feedback about such responses (22, 102-104).
Moreover, such investigations were also directed at fostering the emotional dimension of engagement, which refers to the patients' acceptance of the disease, to an adequate adjustment to their illness (105), and to improving the quality of the relationship between clinicians and patients (24). Specifically, in the analyzed studies of this systematic review, the emotional dimension of engagement has been tackled by using weekly telephone interviews (72,76,84), by using face-to-face communication through online digital platforms (78,80,85), or by giving positive and motivating messages to the patients during the telerehabilitation training (78,81).
Regarding the assessment of engagement during telerehabilitation training in neurorehabilitation, the studies analyzed in this systematic review show that, at the moment, there are few available scales to assess the level of patient engagement and to assess in depth the different components of engagement. However, some measures providing quantitative data about patient engagement are available, such as the PAM (23), the IMI (93), the SADI (89), and the POMS questionnaire (94). Out of these four measurement scales, the newest and most used one is the PAM, which, as described in Table 5, enables an in-depth assessment of patient activation during the healthcare routine. Although the PAM seems one of the better measures to assess patient engagement, the POMS questionnaire could be an excellent complement to further assess the emotional state of patients with neurological disease in their daily healthcare routine and during the telerehabilitation period. The SADI is limited to patients with traumatic brain injury, and this limits the use of this scale to assess self-awareness of the illness in patients with other neurological pathologies. Finally, the IMI could be replaced by the PAM, as the latter is the newest measure and contemplates more aspects of patient activation in comparison to the IMI. Further, the results obtained with the PAM can reflect patient motivation to participate in the healthcare routine. Besides the quantitative engagement measures, a significant number of studies were found that use interviews and diary reports for the qualitative assessment of patient engagement when using telerehabilitation systems. In this regard, it is known that data from motivational interviews play an essential role in evaluating patient engagement during the rehabilitation period (106,107). Moreover, the efficacy of using semi-structured interviews to encourage patients with chronic illness to participate in their healthcare routine has been demonstrated (108).
Finally, regarding the effectiveness of the engagement strategies used in the analyzed studies of this systematic review, 12 studies out of 18 reported positive outcomes in fostering patient engagement after the telerehabilitation training. In particular, the engagement strategies used in these 12 studies were mainly focused on patient participation, patient decision making, and patient self-management, all of them involved in the behavioral, cognitive, and emotional dimensions of engagement (see Table 6). Such positive results are in line with later studies in which a motivational model to foster participation in the neurorehabilitation programs was proposed (109). Moreover, others also proposed new neurorehabilitation strategies by enhancing patient self-management, self-awareness, and motivation in rehabilitation routines (2). Most of the revised studies in this systematic review presented positive results by enhancing the behavioral, cognitive, and emotional dimensions of patient engagement. However, most of them used a "monomethod" study design, directed at assessing qualitative or quantitative engagement outcomes.
LIMITATIONS
The present systematic review has the following limitations with respect to standard protocols for systematic reviews: it was not registered in a public database, a librarian was not included in the bibliographic search stage, and no duplicate, independent searches of the studies were performed.
CONCLUSIONS
The studies discussed throughout this systematic review pave the way for the design of new telerehabilitation protocols, not only focusing on quantitative or qualitative measures but measuring both of them through a mixed-model intervention design (1). Future clinical studies with a mixed-model design will provide richer data regarding the role of engagement in telerehabilitation, leading to a possibly greater understanding of its underlying components.
DATA AVAILABILITY STATEMENT
All datasets generated for this study are included in the article/supplementary material.
AUTHOR CONTRIBUTIONS
MM-G and OR developed the paper concept. MM-G carried out the bibliographic review, was responsible for the methodology, and wrote the manuscript draft. MM and JM contributed to the drafting of the manuscript. FB, FR, and PM gave bibliographic suggestions and reviewed the manuscript for important intellectual content. GR, FM, and OR supervised the editing and revisions for important intellectual content. All the authors approved the final version of the manuscript for submission.
Faculty development program: Way to excellence
Introduction: The importance of faculty development programs (FDP) to improve teaching effectiveness has been emphasized in recent years. Our endeavors to improve teaching at Shifa College of Medicine include the development of student feedback mechanisms, professional development programs, and research into teaching. New trends taking place in academic medicine were accommodated by modification of the faculty development model.
Methods: With an aim to assess the perceptions of faculty about FDP at Shifa College of Medicine we gathered views of faculty, by administering questionnaire, conducting focus group and individual interviews.
Results: More than half of the faculty (51%-83%) agreed with various items related to teaching and learning concepts, and 79% believed that they learned assessment methods. Seventy-three percent agreed that it was a source of introduction to new educational strategies. Sixty-eight percent agreed that the FDP helped to improve skills in the teaching of ethics and professionalism. Results of the focus group discussion show that the faculty found the program helpful in their grooming and development and that it made them more knowledgeable. Views from individual interviews stated that the faculty development program has contributed towards learning.
Conclusion: The FDP at Shifa College of Medicine is valued by the faculty. It has contributed towards excellence in teaching. This program should be continued with an endeavor to improve it further.
Introduction
The importance of faculty development programs to improve the teaching effectiveness has been emphasized in recent years. 1 These programs are developed to improve the standard of teaching 2,3 and mostly focus on enhancing the abilities of medical faculty as teachers. 1 Faculty development is defined by Wilkerson and Irby "as a tool for improving the educational vitality of our institutions through attention to the competencies needed by individual teachers and to the institutional policies required to promote academic excellence". 4 An effective faculty development program improves the quality of teaching. 5 Training of faculty can positively impact the
teaching competencies, leading to improved teaching practices. 6,7,8 Faculty development programs help to enrich the knowledge and skills of teachers. 9 Clinical skills and knowledge alone do not necessarily make a good teacher; therefore, more programs focusing on teaching skills are required for medical teachers. 10 A faculty development program must address the needs of the participants 11 to ensure faculty participation and interest. 12 The areas to be addressed may be identified through formal needs assessment 13 or through informal encounters with the faculty, taking the institutional goals into consideration. 14 Shifa College of Medicine (SCM), a constituent college of Shifa Tameer-e-Millat University, has always laid great emphasis on acquiring able and committed faculty, and a continuing program for professional growth and development for its faculty has remained a top priority for the college.
SCM shifted from discipline based to system based integrated modular curriculum which required training of the faculty for skills and competence to adopt new strategies. The faculty development program was modified, keeping in view the curricular philosophy of SCM, which is student centered, constructivist, collaborative, lifelong learning, Integrated/ clinical relevance and critical thinking. This study was designed to describe the evolution of faculty development program at Shifa College of Medicine and assess the views of the faculty on effectiveness of this program.
Objectives
The objectives of this study are to: 1. Describe the evolution of faculty development program (FDP) at Shifa College of Medicine.
2. Assess the views of the faculty on effectiveness of faculty development program.
Methods
Study was approved by the Institutional Review Board and Ethics Committee of the institution. Mixed methods approach was used to increase validity of the findings. Data regarding perceptions of the faculty on faculty development sessions was collected through questionnaire, focus group and interviews. A feedback questionnaire using five point Likert scale was administered to the junior and senior faculty. Focus group discussion was also conducted with multidisciplinary group of faculty members, and in addition individual interviews were conducted with senior faculty members.
Results
Although the faculty development program was initiated at Shifa College of Medicine in 1999, regular scheduling of the sessions was implemented in 2002. The last Saturday of every month is allocated for a two-hour faculty development seminar that is mandatory for all faculty members. In the initial years of this program, most of the sessions included presentations on various professional and educational aspects. A move from subject-based to system-based integrated modular curriculum was directed towards interactive teaching and self-directed learning. The instructional approaches increased the emphasis on problem-solving, interpersonal skills and attitude. New trends and profound transformations taking place in academic medicine made remodeling of our faculty development program necessary. To accommodate these transitions, major overall changes in faculty development were brought about, and currently workshops and hands-on activities have become predominant. Workshops are planned according to the needs identified during various sessions. Various faculty development seminars revolved around different themes, which included teaching and learning concepts, needs assessment, technology, assessment (formative & summative), program evaluation, learning strategies, curriculum planning & development, quality in medical education, innovations (EBM, professionalism/ethics, humanities), medical research, community-based education, learning environment, and patient safety. Faculty were motivated to carry out research and this resulted in a significant number of scholarly publications in prestigious journals. Faculty were encouraged and supported to participate in various national and international conferences and to join postgraduate medical education programs. The evolution of the faculty development program into a comprehensive, multilevel program helped in promoting excellence in teaching and research. These endeavors led to the development of a well-established department of health professions education, which functions to plan and organize the educational activities.
The questionnaire was administered to 92 faculty members, which included both senior and junior faculty from multiple specialties, using a five-point Likert scale. Strongly agree and agree were merged, and strongly disagree and disagree were also merged, for the purpose of analysis.
More than half of the faculty (51%-83%) agreed with various items related to teaching and learning concept. Seventy-four per cent agreed that the FDP provided opportunities to improve basic facilitation skills. Fifty-five per cent said it was a source of motivation to improve academic qualifications. Eighty-three percent thought sharing of teaching experiences helped them learn and 68% agreed that discussions on student's feedback helped to reflect on one's performance. Seventy-four percent believed that it helped to reframe the traditional thinking of faculty. Fifty-one percent agreed that it encouraged change as an essential component for scholarship in teaching / learning process, and 68 % agreed that it helped to improve communication skills.
Fifty-eight percent thought it helped to identify their areas of improvement. Only 29% agreed with the fact that FDP improved skills in use of information technology and computer in education, 40% disagreed, whereas 32% remained neutral. Seventy-nine percent said they learned assessment methods through workshops. Sixty-seven percent agreed that it introduced learner centered teaching behavior, and helped to increase skills in collaborative teaching. Eighty percent and 73% respectively agreed that they learned various learning strategies and that they were introduced to new educational strategies. Only 32% said it helped to improve skills in teaching of bedside and clinical teaching, 33% remained neutral and 34% disagreed. Fifty-nine percent said it enhanced skills in curriculum planning and module design, 68% found it helpful in developing educational objectives and blue printing. Seventy-four percent believed that it provides opportunities to learn recent advances, emerging trends and issues in the field of medical education, and 57% found it a source of sharing experiences of national/international exposure. Sixty-five percent said it introduced them to evidence based medicine. Forty-two percent believed it promoted personal growth of faculty through literature, poetry and religion, 68% agreed that FDP helped to improve teaching skills of ethics and professionalism and 53% thought it motivated for research. Forty-four percent found it to enhance medical writing skills, research methodology, scientific and medical education research. Sixty-eight percent thought it promoted learning environment.
Only 20% agreed, 35% were neutral and 25% disagreed regarding the role of the FDP in community-based education. A focus group was conducted with multidisciplinary faculty members. The participants believed that the faculty development program at SCM has been a useful experience for them. Sessions on learning strategies and assessment were appreciated. Some of the workshops clearly made a significant difference in performance. This program was said to have helped in the grooming and development of faculty, and in making them more knowledgeable in the field of medical education. They thought that the faculty shared their experiences and innovative ideas through this program. It promoted teamwork and helped the faculty reflect on their performance. Sessions on arts and humanities were appreciated by the faculty. It was believed that the FDP has definitely contributed towards the progress of the faculty; it helped to improve their teaching skills and has helped new teachers to learn various methods of teaching.
Individual interviews were conducted with four senior faculty members. They believed that the objective of FDP was to train the faculty to deliver curriculum optimally, it is organized to familiarize the faculty with learner centered approach, keep up with new trends and improve teaching and assessment.
"I was introduced to new learning strategies and assessment."
When asked if they see the objectives being fulfilled they responded that some of the objectives have been achieved like teaching strategies and assessment methods while other objectives like promotion of research have only been addressed partially. The faculty was able to learn and switch over from traditional to modular curriculum with the help of faculty development program.
"I have learned from these sessions most of the time."
Regarding the challenges faced when FDP was started, they said that it was a challenge to keep the faculty interested and engaged during sessions. Initially there was resistance for a change in the modality of curricular delivery, but over a period of time orientation through FDP helped in decreasing the resistance and now there is more acceptance. In response to a question about the contribution of FDP towards teaching excellence, they said that it has definitely contributed towards teaching excellence. This program should continue and should be further improved. "It helps in grooming the faculty specially the new comers."
Discussion
Recently there has been a significant increase in the number of medical colleges in the region; however, maintaining the quality of medical education is a big challenge. 15 In addition to the implementation of measures for quality assurance in emerging medical schools, evaluation of the programs already adopted in established medical colleges is an essential component for maintaining the standard. We gathered views of the faculty about the faculty development program at Shifa College of Medicine. The results show that the FDP was well received by our faculty. Similar findings were reported in a systematic review of faculty development initiatives designed to improve teaching effectiveness in medical education, where faculty development programs were found to be rated highly for satisfaction. 1 Cilliers and Herman found a positive impact of educational development. 5 Traditionally the role of the medical teacher has been to provide information to the students. The teacher of today is expected to be an efficient facilitator, curriculum and course planner, resource material creator, student assessor, mentor and program evaluator. 16 Harden and Crosby described the roles of a teacher as information provider, resource developer, planner, assessor, facilitator and role model. 17 Our faculty development model helps to prepare the faculty for these roles. Most of our faculty agreed that they were provided opportunities to improve their basic facilitation skills. More than half of the faculty said they learned curriculum planning and module design. Our faculty acknowledged learning assessment strategies through the FDP. It is important for the teacher to be aware of the new teaching methodologies that are being practiced in the modern world, which include a shift from conventional teaching to small group teaching, problem-based learning, innovative curriculum models and changes in assessment methods and tools. 18 The evolutionary change in the FDP over the years with the change in curricular delivery strategies was taken positively. The majority of our faculty felt that they learned various learning strategies through the FDP. For successful implementation of curricular reforms, it is necessary to prepare the faculty for new teaching and assessment methodologies. 19 Our FDP focused on preparing the faculty for teaching in an integrated modular curriculum, which ensured a smooth transition, and the faculty agreed with the usefulness of the program in this respect. Our findings show that the faculty perceive the FDP to have improved the teaching and research skills of the faculty. This is consistent with the findings of a systematic review and meta-analysis which shows a significant impact of faculty development programs on the knowledge and skills of the faculty. 9 Teaching in medical schools is an important responsibility, and with changing trends in medical education good faculty development initiatives have become a need of the day. 20 Our FDP provided an opportunity to learn about recent advances, emerging trends and issues in the field of medical education. It is suggested that role modeling is the best way to inculcate professionalism in students. 21 Our FDP included seminars on professionalism and ethics.
Conclusion
Faculty development program at SCM is valued by the faculty. It has proved to be helpful in educating the faculty on innovative strategies and new trends in medical education, thus making them competent for efficient delivery of curriculum. It has contributed towards excellence in teaching. However constant improvement is an essential requirement to maintain high standards. This program should be continued with an endeavor to improve it further.
Distance rationalization of social rules
The concept of distance rationalizability of social choice rules has been explored in recent years by several authors. We deal here with several foundational questions, and unify, correct, and generalize previous work. For example, we study a new question involving uniqueness of representation in the distance rationalizability framework, and present a counterexample. For rules satisfying various axiomatic properties such as anonymity, neutrality and homogeneity, the standard profile representation of input can be compressed substantially. We explain in detail using quotient constructions and symmetry groups how distance rationalizability is interpreted in this situation. This enables us to connect the theory of distance rationalizability with geometric concepts such as Earth Mover distance and optimal transportation. We expect this connection to prove fruitful in future work. We improve on the best-known sufficient conditions for rules rationalized via votewise distances to satisfy anonymity, neutrality, homogeneity, consistency and continuity. This leads to a class of well-behaved rules which deserve closer scrutiny in future.
in a single analysis), and explicitly distinguish several concepts that have sometimes been conflated in previous work. In Section 3 we give necessary and sufficient conditions for a rule to be distance rationalizable, improving slightly on results of the abovementioned authors. We pose an interesting question regarding uniqueness of representation in the DR framework, which does not appear to have been noticed before. We give a counterexample in Section 3.2.
In Section 4 we explain how equivalence relations and symmetries between elections allow us to describe DR rules more compactly, and make the connection between the original profile-based definitions and the quotient representations explicit. The distinction between compatible and totally compatible distances is important and new, and the idea of a distance being simple with respect to an equivalence relation is also new as far as we know. None of our results in this section rely on the distance being votewise and are proved for general consensuses; the applications therefore generalize results of Elkind, Faliszewski and Slinko [5]. In particular, we make the connection between ℓ 1 -votewise distances and the Earth Mover distance, relating the subject of distance rationalization of anonymous rules to the theory of optimal transportation and maximum weight matchings. We believe that this new connection will prove fruitful in future work.
We apply the above results to neutrality and anonymity, obtaining complete characterizations in Propositions 4.28 and 4.36. In Section 5 we deal with homogeneity, which is not quite covered by the results on groups. Our approach shows that the reason Dodgson's rule is not homogeneous is because the equivalence relation is induced by the action of a monoid ("group without inverses") that is not a group.
Specializing to votewise distances, we concentrate in Section 6 on what we term the Votewise Minimizer Property, which is a way of requiring the consensus and distance to combine well. This allows us to give improved sufficient conditions for DR rules to satisfy homogeneity, consistency, and continuity.
Basic definitions
We use standard concepts of social choice theory. Not all of these concepts have completely standardized names. We shall need to deal with several candidate and voter sets simultaneously, which explains the generality of our definitions. However in many cases it suffices to deal with a fixed finite voter and candidate set. Definition 2.1. We fix an infinite set C * = {c 1 , c 2 , . . . , . . . } of potential candidates and an infinite set V * = {v 1 , v 2 , . . . , } of potential voters. Let C ⊆ C * . For each s ≥ 1, an s-ranking is a strict linear order of s elements chosen from C. The set of all s-rankings is denoted L s (C). When C is finite and s = |C|, we write simply L(C). When s = 1, we identify L 1 (C) with C in the natural way.
Remark 2.2. When C is finite, of size m say, the set L s (C) consists of strict linear orderings of C and has size m(m − 1) · · · (m − s + 1). By fixing a default linear ordering on C, we can interpret elements of L s (C) as partial permutations of C in the usual way. Definition 2.3. A profile is a function π : V → L(C) where V ⊂ V * and C ⊂ C * are finite. We denote the set of all profiles by P. An election is a triple (C, V, π) with π ∈ P and π : V → L(C). We denote the set of all elections with fixed C and V by E(C, V ), and the set of all elections by E.
Remark 2.4. By definition π(v) ∈ L(C) for each v ∈ V . If C is linearly ordered as described above, then π(v) −1 denotes the inverse permutation, and for each c ∈ C, r(π(v), c) := π(v) −1 (c) gives the rank of c in v's preference order.
Of course, C and V are implicit in the definition of π, so strictly speaking an election is completely determined by a profile. We distinguish the two concepts because we sometimes want to deal with several different voter or candidate sets at the same time, and because C is not really completely determined -any superset of C would also work. Definition 2.5. A social rule of size s is a function R that takes each election E = (C, V, π) to a nonempty subset of L s (C). When there is a unique s-ranking chosen, the word "rule" becomes "function". When s = 1, we have the usual social choice function, and when s = m the usual social welfare function.
For each subset D of E we can consider a partial social rule with domain D to be defined as above, but with domain restricted to D. We denote the domain of a partial social rule R by D(R). If R and R ′ are partial social rules such that D(R) ⊆ D(R ′ ) and R(E) = R ′ (E) for all E ∈ D(R) then we say that R ′ extends R.
Remark 2.6. Most previous work has dealt only with the cases s = 1 and s = m.
2.1. Consensus. Intuitively, a consensus is simply a socially agreed unique outcome on some set of elections. We now define it formally. Definition 2.7. An s-consensus is a partial social function K of size s. The domain D(K) of K is called an s-consensus set and is partitioned into the inverse images K r := K −1 ({r}).
Remark 2.8. Note that we allow K r to be empty. This happens rarely for natural rules in the distance rationalizability framework, because it implies that there is no election for which r is the unique social choice. However it is technically useful and allows us to deal with varying sets of candidates.
It often makes sense to ensure coherence between the various values of s for which we formalize a given consensus notion.
Definition 2.9. Let K be a 1-consensus. For each s we define an s-consensus K (s) (the srestriction of K) as follows. For each candidate c, K c is defined. Given E = (C, V, π) ∈ E, define E −c to be the election (C \ {c}, V, π ′ ), where π ′ is obtained from π by erasing c from each ranking.
Let D 2 be the set of all elections E such that both E = (C, V, π) and E −c both belong to the domain of K, where c = K(E). Letting c ′ = K(E −c ), define K (2) on D 2 by its output, the 2-ranking cc ′ . Continue by induction, reducing the domain at each step if necessary, and output a single s-ranking.
Several specific consensuses have been described in the literature. Here we unify the presentation of several of the most common ones. Definition 2.10. (qualified majority consensus) Let 1/2 ≤ α < 1. The (α, s)-majority consensus S (α,s) is the s-consensus with domain consisting of all elections with the following property: there is some fraction p > α of the voters, all of whom agree on the order of the top s candidates. The consensus choice is this common s-ranking.
Special cases: • When α = 1/2, we obtain the usual majority s-consensus M s .
• The limiting value as α → 1 gives the case of unanimity. We denote this by S s . When s = |C|, we simply write S (called the strong unanimity consensus), whereas when s = 1, for consistency with previous authors we denote it W, the weak unanimity consensus.
Remark 2.11. In general, the s-restriction of S α,1 is not S α,s : if a majority rank a first, and a majority of those rank b above all candidates other than a, it is not necessarily the case that a majority of votes have ab at the top (the fraction is more than 2α − 1, however). However, a majority of the original voters rank b either first or second. The s-restriction is the consensus for which more than fraction α of voters agree on the top candidate, more than α agree on the top two, etc. However, S s is indeed the s-restriction of W: if all voters rank a first and all rank b over all candidates other than a, then all agree on the ranking ab, etc. Definition 2.12. (qualified Condorcet consensus) Let 1/2 ≤ α < 1. The α-Condorcet consensus C α has domain consisting of all elections for which an α-Condorcet winner exists. That is, there is a (necessarily unique) candidate c such that for any other candidate c ′ , a fraction strictly greater than α of voters rank c over c ′ .
We define C (α,s) to be the s-restriction of C α . Special cases: • When α = 1/2 we denote this by C, the usual Condorcet consensus.
2.2.
Distances. We require a notion of distance on elections. We aim to be as general as possible.
A metric is a distance that is both a quasimetric and a pseudometric. We call a distance standard if d(E, E ′ ) = ∞ whenever E and E ′ have different sets of voters or candidates (this term has not been used in previous literature).
Example 2.14. Let d del (E, E ′ ) (respectively d ins (E, E ′ )) be defined as the minimum number of voters we must delete from (insert into) election E in order to reach election E ′ (or +∞ if E ′ can never be reached). Each of d ins and d del is a nonstandard quasimetric.
Example 2.15. (shortest path distances) Consider a digraph G with nodes indexed by elements of E, and some edge relation between elections. Define d to be the (unweighted) shortest path distance in G. This is a quasimetric. It is a metric if the underlying digraph is a graph. For example, d H , d K , d ins , d del are defined via essentially this construction. Note that it suffices to specify for which E, E ′ we have d(E, E ′ ) = 1 in order to specify such a distance, and not every quasimetric is a shortest path distance, even after scaling by a constant, because if there are two points at distance 3 there must also be points at distance 2, for example. Example 2.16. (some strange distances) The following distances will be useful for existence results later. Let R be a rule.
The first is a metric used by Campbell and Nitzan [1]. Define d as follows.
We claim that d is a metric. The only non-obvious axiom is the triangle inequality, and it suffices to consider the case where E, E′ and E′′ are distinct.

The second distance is a variant of the first, where instead we define d(E, E′) = 0 if and only if E = E′, or R(E) = R(E′) and |R(E)| = 1. This is a pseudometric, since elections with the same unique winner are at distance zero. To prove the triangle inequality, first note that d(E, E′) ≤ 1 if and only if R(E) ⊆ R(E′) and |R(E)| = 1, or the analogous condition with E and E′ exchanged holds. If 2 ≥ d(E, E′′) > d(E, E′) + d(E′, E′′), then at least one of the two terms on the right is 0 and the other is at most 1. Thus (without loss of generality) E and E′ have a common unique winner under R and R(E′) ⊆ R(E′′), yielding the contradiction d(E, E′′) ≤ 1.

The third distance is the shortest path metric defined as follows: there is an edge joining E and E′ if and only if |R(E′)| = 1 and R(E′) ⊂ R(E), or the same with E and E′ exchanged (these are the same as the cases defining d(E, E′) = 1 in the definition of the Campbell-Nitzan distance).
2.2.1. Votewise distances. One commonly used class of distances consists of the votewise distances formalized in [5], which we now define after some preliminary work. They are each based on distances on L(C). See [4] for basic information about metrics on the symmetric group.
Example 2.17. The most commonly used such distances on L(C) are as follows.
• the discrete metric d_H, defined by d_H(ρ, σ) = 0 if ρ = σ and d_H(ρ, σ) = 1 otherwise;
• the inversion metric d_K (also called the swap, bubblesort or Kendall-τ metric), where d_K(ρ, σ) is the minimal number of swaps of adjacent elements required to convert ρ to σ.
Definition 2.18. A seminorm on a real vector space X is a real-valued function N satisfying N(λx) = |λ| N(x) and N(x + y) ≤ N(x) + N(y) for all x, y ∈ X and all λ ∈ R. Note that this implies that N(0) = 0 and N(x) ≥ 0 for all x ∈ X.
Remark 2.19. Every seminorm induces a pseudometric via d(x, y) = ||x − y||. This is a metric if and only if the seminorm is a norm.
Example 2.20. Consider an n-dimensional space X with fixed basis e_1, . . . , e_n and corresponding coefficients x_i for each element x ∈ X. Fix p with 1 ≤ p < ∞ and define the ℓ^p-norm on X by ||x||_p = (|x_1|^p + · · · + |x_n|^p)^{1/p}. When p = ∞ we define the ℓ^∞-norm by ||x||_∞ = max_i |x_i|.

Definition 2.21. (votewise distances) Choose a family {N_n}_{n≥1} of seminorms, where N_n is defined on R^n. Fix candidate set C and voter set V, and choose a distance d on L(C). Extend d to a function on P(C, V) by taking n = |V| and defining, for σ, π ∈ P(C, V),

d_{N_n}(π, σ) := N_n(d(π_1, σ_1), . . . , d(π_n, σ_n)).
This yields a distance on elections having the same set of voters and candidates. We complete the definition of the extended distance (which we denote by d N ) on E by declaring it to be standard.
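To make Definition 2.21 concrete, the following minimal Python sketch (our illustration, not part of the original text; all function names are hypothetical) computes a votewise ℓ1 distance between two profiles, using the swap distance on rankings as the base distance.

from itertools import combinations

def swap_distance(r, s):
    # Kendall-tau / swap distance between two rankings given as tuples of candidates.
    pos_r = {c: i for i, c in enumerate(r)}
    pos_s = {c: i for i, c in enumerate(s)}
    return sum(1 for a, b in combinations(r, 2)
               if (pos_r[a] - pos_r[b]) * (pos_s[a] - pos_s[b]) < 0)

def votewise_l1(profile1, profile2, base=swap_distance):
    # Apply the base distance vote by vote, then combine with the l1 norm (a sum),
    # assuming both profiles are over the same voters and the same candidates.
    assert len(profile1) == len(profile2)
    return sum(base(p, q) for p, q in zip(profile1, profile2))

p = [("a", "b", "c"), ("b", "a", "c")]
q = [("a", "c", "b"), ("b", "a", "c")]
print(votewise_l1(p, q))  # 1: one adjacent swap in the first vote, none in the second

Replacing swap_distance by a 0/1 comparison of whole rankings would give the Hamming-style variant discussed next.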
We use the abbreviation d^p for d^{ℓ^p}, and sometimes we even use just d for d_N if the meaning is clear. The distances d^1_H and d^1_K are called respectively the Hamming metric and the Kemeny metric. The Hamming metric measures the number of voters whose preferences must be changed in order to convert one profile to another, and as such has an interpretation in terms of bribery. The Kemeny metric measures how many swaps of adjacent candidates are required, and is related to models of voter error. Among the many other votewise metrics, we single out d^1_S, sometimes called the Litvak distance.

2.2.2. Tournament distances. Some distances depend only on the net support for candidates.
Example 2.24. (tournament distances) Given an election E = (C, V, π), we form the pairwise majority digraph Γ(E) with nodes indexed by the candidates, where the arc from a to b has weight equal to the net support for a over b in a pairwise contest. Formally, there is an arc from a to b whose weight equals n ab − n ba , where n ab denotes the number of rankings in π in which a is above b.
Let M(E) be the weighted adjacency matrix of Γ(E) (with respect to an arbitrarily chosen fixed ordering of C). Given a seminorm N on the space of all |C| × |C| real matrices, we define the N-tournament distance by d(E, E′) := N(M(E) − M(E′)). A closely related distance is defined in the analogous way, but where each element of the adjacency matrix is replaced by its sign (1, 0, or −1). We call this the N-reduced tournament distance. We denote the special cases where N is the ℓ^1 norm on matrices by d_T and d_RT respectively. A (reduced) tournament distance cannot be a metric, even if N is a norm, because it does not distinguish points (the mapping E → M(E) is not one-to-one). However, it is a pseudometric.
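As an illustration of Example 2.24 (again our own sketch with hypothetical names, not the authors' code), the pairwise majority matrix and the ℓ1 tournament distance can be computed as follows.

def majority_matrix(candidates, profile):
    # Entry (i, j) is the net support n_ij - n_ji for candidate i over candidate j.
    idx = {c: i for i, c in enumerate(candidates)}
    n = len(candidates)
    counts = [[0] * n for _ in range(n)]
    for vote in profile:  # each vote is a ranking, best candidate first
        pos = {c: i for i, c in enumerate(vote)}
        for a in candidates:
            for b in candidates:
                if a != b and pos[a] < pos[b]:
                    counts[idx[a]][idx[b]] += 1
    return [[counts[i][j] - counts[j][i] for j in range(n)] for i in range(n)]

def l1_tournament_distance(candidates, profile1, profile2):
    M1 = majority_matrix(candidates, profile1)
    M2 = majority_matrix(candidates, profile2)
    return sum(abs(M1[i][j] - M2[i][j])
               for i in range(len(candidates)) for j in range(len(candidates)))

The reduced variant would simply replace each entry of the majority matrix by its sign before taking the norm.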
2.3. Combining consensus and distance. In order for a rule to be definable via the DR construction, it is necessary that the first of the following properties holds. The second property avoids trivialities and ensures some theorems in Section 3 are true. We shall assume both properties from now on.

Definition 2.25. Let d be a distance on E and K a consensus. Say that (K, d) distinguishes consensus choices if whenever x ∈ K_r, y ∈ K_{r′} and r ≠ r′, then d(x, y) > 0.
We use a distance to extend a consensus to a social rule in the natural way. The choice at a given election E consists of all s-rankings r whose consensus set K r minimizes the distance to E. We introduce the idea of a score in order to use our intuition about positional scoring rules.
Definition 2.26. (DR scores and rules)
Suppose that K is an s-consensus and d a distance on E. Fix an election E ∈ E. The DR score of an s-ranking r at E is |r| := d(E, K_r) = inf{d(E, E′) : E′ ∈ K_r}. The rule R(K, d) chooses at E all s-rankings of minimal DR score:

R(K, d)(E) := {r ∈ L s (C) : |r| ≤ |r′| for all r′ ∈ L s (C)}.   (1)

If a social rule R satisfies R = R(K, d), we say that R is distance rationalizable (DR) with respect to (K, d).
Remark 2.27. Note that if K r is empty, then |r| = ∞. DR scores are defined so that they are nonnegative, and higher score corresponds to larger distance. This is not consistent with the usual scoring rule interpretation in Example 2.28, but the two notions of score are closely related. Our DR scores have the form M − s where s is the score associated with the scoring rule and M depends on E but not on any r ∈ L s (C).
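For small, finite collections of elections, Definition 2.26 can be evaluated by brute force. The sketch below is ours and assumes each consensus class is given as an explicit finite list of elections (which is of course not true of the full framework); it returns all outcomes of minimal DR score.

import math

def dr_rule(election, consensus_sets, dist):
    # consensus_sets maps each outcome r to a list of consensus elections K_r;
    # dist is a distance between two elections. Empty classes get score infinity.
    scores = {r: min((dist(election, e) for e in class_r), default=math.inf)
              for r, class_r in consensus_sets.items()}
    best = min(scores.values())
    return {r for r, s in scores.items() if s == best}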
Table 1. Some known rules in the DR framework (see discussion in Section 2.4). Among its entries are the Copeland, Slater, maximin, Young, plurality and modal ranking rules, together with trivial rules, for distances including d_ins and d_del.

2.4. Some specific rules. Table 1 presents a few known rules in this framework. Most of the rules in the table are well known. We single out the following less obvious references. The modal ranking rule was investigated by Caragiannis, Procaccia and Shah [2]. The voter replacement rule (VRR) was defined essentially as a missing entry in such a table [6]. The entries marked "trivial" are so labelled because in those cases every election not in K is at distance +∞ from every K_r. Missing entries reflect the authors' knowledge, and may have established names. Our table overlaps with that in [10]; note that the (C, d^1_H) entry is incorrect in that reference, as pointed out by Elkind, Faliszewski and Slinko [6]. Our table also overlaps with one presented by Elkind, Faliszewski and Slinko [5].
Example 2.28. (scoring rules) The positional scoring rule defined by a family of weight vectors w := w^(m) satisfying w_1 ≥ · · · ≥ w_m, w_1 > w_m elects all candidates with maximal score, where the score of a in the profile π is defined as Σ_{v∈V} w_{r(π(v),a)}. The positional scoring rule defined by w has the form R(W, d_w).

Remark 2.29. Note that d_w is a metric on L s (C) if and only if w_1, . . . , w_s are all distinct. The score of r under the rule defined by w is the difference nw_1 − |r|. For example, for Borda with m candidates (corresponding to w = (m − 1, m − 2, . . . , 1, 0)), the maximum possible score of a candidate c is (m − 1)n, achieved only for those elections in W_c. The score of c under Borda is exactly (m − 1)n − K, where K is the total number of swaps of adjacent candidates needed to move c to the top of all preference orders in π(E).
Plurality (corresponding to w = (1, 0, 0, . . . , 0)) and Borda are special cases, where d_w simplifies to d^1_H and d^1_S respectively. As far as the distance to W or C is concerned, d^1_S and d^1_K are proportional, but they are not proportional in general [10, p. 298-299].

Example 2.30. (Copeland's rule) Copeland's rule can be represented as R(C, d_RT). Indeed, in an election E, the Copeland score of a candidate c (the number of points it scores in pairwise contests with other candidates) equals m − 1 − s (where m = |C|), and s is the minimum number of pairwise results that must be changed for E to change to an election that belongs to C_c.
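Continuing the illustrations above (again a hypothetical sketch of ours, reusing majority_matrix from the earlier snippet), the Copeland scores of Example 2.30 can be computed directly; pairwise ties are given zero points here, which is one common convention.

def copeland_scores(candidates, profile):
    M = majority_matrix(candidates, profile)
    idx = {c: i for i, c in enumerate(candidates)}
    return {a: sum(1 for b in candidates if b != a and M[idx[a]][idx[b]] > 0)
            for a in candidates}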
Every rule R(K, d), where K is a 1-consensus, automatically yields a social rule R s (K, d) of size s as follows.
Definition 2.31. Let 1 ≤ s ≤ m and suppose that K is a 1-consensus and d a distance on E. We define a social rule R s (K, d) of size s by choosing s elements in increasing order of score (if there are ties in the scores, we consider all possible such orderings).
Remark 2.32. R s (K, d) is single-valued if and only if the lowest s scores of candidates are distinct. Note that if the s-consensus K ′ is a restriction of the 1-consensus K, it is not necessarily the case that R(K ′ , d) = R s (K, d). For example, S is a restriction of W, and R(W, d 1 K ) is the social choice rule, Borda's rule. By above, we can also define the social welfare version of Borda's rule. However, R(S, d 1 K ) is Kemeny's rule. The social choice rule obtained by taking the top element of the ranking given by Kemeny's rule is also sometimes called Kemeny's rule. All four rules mentioned here are different.
Existence and uniqueness
The DR framework is not very restrictive without further assumptions on K and d, as shown by Campbell and Nitzan [1].
3.1. Existence. We give necessary and sufficient conditions, an improvement on [1, Prop. 4.4] and [5, Thm 2].
Definition 3.1. For each rule R, there is a unique maximum consensus K max (R), namely that whose consensus set D max consists of all elections on which R gives a unique output, which we define as the consensus choice.
Remark 3.2. Most rules commonly used in practice have ties, so that the domain of K max (R) is smaller than the domain of R. For example, if a social choice rule satisfies anonymity (symmetry with respect to voters) and neutrality (symmetry with respect to candidates) and is faced with a profile containing exactly one of each possible preference order, it must select all candidates as winners.
The image of the rule is the set of all r ∈ L s (C) which occur as a winner in some election. That is, there exists E ∈ E such that r ∈ R(E). The rule satisfies nonimposition if every r ∈ L s (C) occurs as a unique winner somewhere -in other words, the unique image of R equals L s (C).
Remark 3.4. Although slightly confusing (the image should perhaps be a set of subsets rather than their union) this is the standard terminology for set-valued mappings in mathematics.
We need to rule out the possibility of an election being at infinite distance from all consensus elections. There is no problem with an election being equidistant from all nonempty consensus sets, but without this assumption, empty consensus sets will (by convention) also be at the same distance.
Proposition 3.6. Let K be a consensus and R a rule. There exists a nontrivial distance rationalization R = R(K, d) if and only if the following two conditions hold: (i) R extends K; (ii) the image of R equals the image of K. Furthermore, d can be chosen to be a metric.
Proof. The first condition is necessary because of the assumption that (K, d) distinguishes consensus choices. The second condition is necessary: the image of K is contained in the image of R because R extends K, and by the nontriviality assumption, if K_r = ∅ then r is not a winner at any election. Now assume that the two conditions hold. Let d denote either of the first two distances in Example 2.16 and let S = R(K, d). We claim that S = R (note that the rationalization is nontrivial because the distances are finite). Let E ∈ E. Since R extends K the result is immediate if E ∈ K_r for some r, because then R(E) = {r} and S(E) = {r} since d distinguishes consensus choices. Now suppose that E is not a member of K_r for any r. Note that d(E, K_r) ≥ 2 if r ∉ R(E). For each r ∈ R(E), by assumption r is in the image of K, so there is F ∈ K_r with d(E, F) = 1 (for the first and third distances, or the second if |R(E)| > 1) or F ∈ K_r with d(E, F) = 0 (for the second distance, if |R(E)| = 1). Thus S(E) is precisely the set of r for which r ∈ R(E), in other words S(E) = R(E).

Proof. Let K = K_max(R). The image of K is precisely the unique image of R, and R clearly extends K, so this follows directly from Proposition 3.6.
Remark 3.8. In Proposition 3.6, if we assume that R and K satisfy nonimposition, then condition (ii) is satisfied. In this case R is distance rationalizable if and only if it extends K [1,Prop. 4.4]. In this case, the third distance from Example 2.16 can be used. Note that in general the third distance does not work -consider a rule for which only one consensus set K a is nonempty, yet the rule returns a disjoint two-element set {b, c} at some election E. The third distance would then yield {a, b, c} at E, a contradiction.
However the assumption of nonimposition is not necessary -consider the rule in which R(E) = {r} for every election E, and choose d to be the discrete metric.
Thus if K is specified, the question of existence is settled. For example, every social welfare rule satisfying the usual unanimity axiom (if every voter has the same preference order, the rule outputs precisely this common ranking) can be rationalized with respect to S.
In view of the flexibility of the DR framework, it is clear that the key idea is to make an appropriate choice of a "small" K and "natural" d so as to recapture rule R via R = R(K, d).
3.2. Uniqueness. We now turn to the question of uniqueness. The construction in the proof of Proposition 3.6 shows that changing both K and d can lead to the same rule. When K is fixed and d varies, the rule often changes, although it sometimes does not. Similarly, when d is fixed and K varies, the rule sometimes does not change. For example, consider Copeland's rule, which can be described as R(C, d_RT); it can also be described with respect to a larger consensus. The standard examples such as Borda's, Kemeny's, and Copeland's rules all behave well when we extend the consensus set beyond the one used to define them. This leads us to the following question: if R has the form R(K, d), and K′ is a consensus that extends K, is it necessarily the case that R = R(K′, d)? In particular, does R = R(K_max(R), d)? The answer is no in general, as we now show.
Quotients
Symmetries of voting rules occur very often in practice. In this section, we show how to express distance rationalization using only symmetric objects and functions. We start with general equivalence relations, then equivalence relations induced by actions of symmetry groups, and then consider special cases of such actions. In Sections 4.5, 5 and 4.4, we apply the general results to anonymity and homogeneity, neutrality and reversal symmetry.
We use a general equivalence relation ∼ on E, which we shall specialize in later sections. All our definitions in this section are understood to be with respect to ∼. For example, we may refer to "compatibility" and "total compatibility" without mentioning ∼ directly.
Let Ē denote the equivalence class of E, and let Q denote the set of equivalence classes. The usual quotient map E ↦ Ē takes E onto Q.
Remark 4.2. In usual mathematical terms, R is compatible with ∼ if and only if it is an invariant for ∼ or a morphism for ∼.

Definition 4.3. If R is compatible then we may define a mapping R̄ on Q via R̄(Ē) = R(E) for every E ∈ E (it is well-defined precisely because of compatibility of R). We call R̄ a partial social rule on Q.
Remark 4.4. We shall apply this construction later, where ∼ is the relation defining anonymity or homogeneity, in which case everything makes sense because the projection to the quotient space does not change the candidate sets. However, if the projection does affect the candidate sets (as with neutrality) the result may look strange and the interpretation rather uninteresting, although the theorems will be correct. For example, a rule compatible with the equivalence relation defining neutrality must be the constant rule which chooses the same r at every election, or the rule that chooses all possible r at every election. We discuss this more in Section 4.4.
4.1. Totally compatible distances. We want to define DR rules on Q.
Definition 4.8. Let δ be a distance on Q and K a consensus on Q. The rule R(K, δ) is defined using the analogue of (1).

Proposition 4.9. The following conditions are equivalent for a social rule R on E.
(i) R is compatible and distance rationalizable.
Proof. Suppose that the first condition holds. We use the consensus K := K_max(R) from Definition 3.1 (which is compatible because R is compatible). We can recapture R as R(K, d) by defining d to be the second distance in Example 2.16. Let R′ = R(K, d). Then if E ∈ D(K), necessarily R′(E) = R(E). If E ∉ D(K) then R′(E) is precisely the set of r for which r ∈ R(E), namely R(E). Thus R′ = R. It remains only to check that d is totally compatible. Since d is defined in terms only of the images R(E) and R is compatible, this follows immediately.

Suppose that the second condition holds. Define δ(Ē, Ē′) = d(E, E′) (this is well-defined since d is totally compatible), and define K̄ from K analogously. Then R̄ = R(K̄, δ).
Finally, suppose that the third condition holds. Define K, d by composing $\bar{K}$, δ with the projection to Q. Then R = R(K, d) and R is compatible since $R(E) = \bar{R}(\bar{E})$.
The distance and consensus used in the proof of Proposition 4.9 are rather unnatural (note that the first and third distances in Example 2.16 would not even work, being metrics). We now consider more natural constructions that relate to the original distance and use the equivalence relation explicitly.
4.2. Quotient distances. When dealing with equivalence classes, the obvious idea is to use a quotient distance [3]. This concept is relatively little-known.

Definition 4.10. We define $\bar{d} \colon Q \times Q \to \mathbb{R}^{+}$ to be the quotient distance induced by ∼.
Remark 4.11. The standard construction of the quotient distance $\bar{d}$ is
$$\bar{d}(x, y) = \inf \sum_{i=1}^{k} d(E_i, E_i') \qquad (2)$$
where the infimum is taken over all admissible paths, namely finite sequences $E_1, E_1', \dots, E_k, E_k'$ with $E_i' \sim E_{i+1}$ for each $i < k$, such that $E_1$ projects to x and $E_k'$ projects to y. We now focus on a special situation where $\bar{d}$ has a much simpler formula.
Non-simple distances do arise in our framework.

Proof. We have a chain of equalities in which the first holds by definition of distance to a set, the second because d is simple, the third by definition of $\bar{d}$, the fourth by compatibility of K and the fifth for the same reason as the first. Now let $S = R(\bar{K}, \bar{d})$. Then $S(\bar{E}) = R(E)$ by the above, for all E. If R := R(K, d) is compatible then $\bar{R}$ exists and $\bar{R}(\bar{E}) = R(E)$ for all E. Thus $S = \bar{R}$.

Remark 4.19. The condition that R(K, d) is compatible is not always satisfied, as we see when studying homogeneity in Section 5. In Section 4.3 we give sufficient conditions for it to be satisfied automatically.
4.3. Symmetry groups.
Equivalence is a form of symmetry between elections. Proposition 4.17 is clearly useful, but the only simple distances we have seen so far are totally anonymous ones, for which the result is obvious. We introduce a strengthening of equivalence that will yield simple distances. We apply it in later subsections to discuss anonymity, neutrality, reversal symmetry and homogeneity.
We recall some basics of the theory of group actions on sets. Let X be a set and G a subgroup of the group of all permutations of X. The orbit of x ∈ X under G is the set of all g(x) as g ranges over G.
Definition 4.20. Let ∼ be an equivalence relation on E and let G be a group acting on E via morphisms. In other words, for each g ∈ G, E ∼ E ′ implies g(E) ∼ g(E ′ ). If the equivalence classes of ∼ are precisely the orbits under the action of G, then we say ∼ is induced by G.
The distance d is G-equivariant if G acts via isometries: $d(g(E), g(E')) = d(E, E')$ for all $g \in G$ and all elections E, E'.

Proposition 4.21. Suppose that G is a group that induces ∼ via an action on E. The following conditions are equivalent for a social rule R.
(i) R is G-invariant and distance rationalizable.
Proof. The first, second and fourth parts are equivalent by Proposition 4.9. The second implies the third by definition. Suppose that the third condition holds. It remains to show that R is G-invariant. Fix arbitrary r and g ∈ G. Since K is G-invariant, g(K r ) = K r . Then d(E, K r ) = d(g(E), g(K r )) = d(g(E), K r ). Thus R(E) = R(g(E)), yielding the second condition.
However, the proof of Proposition 4.21 does not give a relationship between the distances used in parts (ii) and (iii). We now proceed to clarify this. We first give an important sufficient condition for a distance to be simple.

Proof. We show that for each x, y ∈ Q, the minimum value of k for paths achieving the minimum in (2) is always 1. Assume that this is not the case, so there exist x, y, a minimum k > 1 and admissible paths achieving it. Choose g, h ∈ G so that $g(E_k) = E'_{k-1}$ (possible since ∼ is induced by G). Then G-equivariance and the triangle inequality yield an admissible path with fewer steps and no greater total length. This contradicts the minimality of k, and this contradiction shows that d is simple. The other formulae for $\bar{d}$ follow immediately, because all E' projecting to y are equivalent, so that each can map onto any other via some g.
Proof. R is G-invariant by Proposition 4.21, so $\bar{R}$ exists. By Proposition 4.22, d is simple. The result follows from Proposition 4.17.
4.4. Neutrality and reversal symmetry. Proposition 4.21 does not apply to the study of these properties, because rather than R(g(E)) = R(E), we want R(g(E)) = g(R(E)): the output of the rule changes in a consistent way. This is because the group action changes the candidates, unlike the case with anonymity.
A social rule of size s is a mapping taking each election to a set of s-rankings. If a group G acts on the set of rankings, then there is a natural induced action on social rules: g(R) is the rule for which g(R)(E) = g(R(E)). If the group acts on s-rankings for all s, we can say more.
Definition 4.25. Suppose that G is a group acting on L(C * ) such that it maps L s (C * ) to itself for every s. The partial social rule R is G-equivariant if the identity R(g(E)) = g(R(E)) holds.
An example of this is reversal symmetry. A social welfare rule satisfies reversal symmetry if turning all input rankings upside down also reverses the output. The group in question is the group of order 2, generated by g say. All our examples so far of distances on rankings have satisfied reversal symmetry.
Remark 4.26. Reversal symmetry has been defined for social choice rules as: if a is the unique winner in the original profile, then a is not a winner in the reversed profile. For example, the social welfare version of Borda's rule satisfies this. However for social choice rules which do not come from social welfare rules, this is inconsistent with our definition above. In the case of DR rules, there is no difficulty, because each social choice rule yields a social welfare rule as in Definition 2.31.
It is straightforward to prove an analogue of Proposition 4.21 that extends to the case of G-equivariant rules. We omit the details. For example, since S, d 1 H and d 1 K each satisfy reversal symmetry, so do the modal ranking rule and Kemeny's rule.
If a group G acts on C* then there is a natural induced action on s-rankings, whereby the ranking a_1 ⋯ a_s maps to g(a_1) ⋯ g(a_s). An example of this involves neutrality. In this case G is the group of all permutations of the candidates. Neutrality is a very natural condition for consensuses and for distances, and is satisfied by all our main examples. It means that the identities of candidates are not relevant because each candidate is treated symmetrically.

Proposition 4.28. Let K be a neutral consensus and d a neutral distance. Then R(K, d) is neutral. Conversely, if R is neutral and distance rationalizable then R = R(K, d) where K and d are neutral.
4.5. Anonymity. We now apply Proposition 4.21 directly. First we discuss the concept of anonymity. Several authors use a fixed finite voter set and define a rule to be anonymous if the rule is invariant under permutations of the set. This deals with the order of voters, but not their identities. On the other hand, allowing arbitrary identities leads us to issues of classes that are not sets, category theory, etc. Our convention that there is a single countably infinite set of voters allows us to deal both with the order and identity of voters.
We start with an example that explains the need to distinguish G-equivariance from total compatibility. We can use the results of Section 4.3 directly.
Definition 4.30. Let G be the group of all bijections of V * . For any set X, we define an action of G on functions in the usual way by g ·f (v) = f (g(v)). In particular for each C we can apply this to X = L(C). This allows us to define an action on E via g · (C, V, π) := (C, g(V ), g · π). Let ∼ be the equivalence relation induced by this action. A partial rule is anonymous if it is compatible with ∼. A distance is anonymous if it is G-equivariant, and totally anonymous if it is totally compatible with ∼.
We denote Q by V and call it the set of anonymous profiles or voting situations.

Example 4.34. For an anonymous votewise distance, we can let $S = \{s_1^{n_1}, \dots, s_k^{n_k}\}$ denote the multiset of weight n corresponding to the n-tuple $(a_1, \dots, a_n) \in \mathbb{R}^n$. We can define $N(S) = N_n(a)$, where $n = \sum_i n_i$ can be computed knowing only S.
For example, consider the ℓ p norm for 1 ≤ p ≤ ∞ defined on R n . This yields an anonymous votewise distance when coupled with any underlying distance d on L(C). For each x, y ∈ V, the distance d(x, y) is the minimum number of voters whose votes must be changed in order to transform x into y. For example if x ∈ V has 2 abc voters and 3 bac voters, while y has 2 bac voters and 3 cba voters, then d(x, y) = 3. Note that for the Kemeny metric d := d 1 K , d(x, y) = 8. 4.5.2. Anonymous DR rules. Because of the special form of the equivalence relation for anonymity (it does not touch the candidate sets), a partial social rule on V has a nice form. Indeed, many authors define voting rules directly on V. We simply define an anonymous rule in the DR framework by choosing a consensus notion K on V and a distance δ on V, and using the analogue of (1). Because V can be described as multisets which correspond geometrically to histograms, this may allow us to create interesting anonymous rules using geometric intuition.
The next characterization, which follows directly from Propositions 4.21 and 4.23, answers positively a question raised in [5, p. 362]. If K is an anonymous consensus and d is an anonymous distance, then R(K, d) is anonymous. Conversely, if R is anonymous and distance rationalizable, then R = R(K', d') where K' is anonymous and d' is totally anonymous.
This applies to all consensuses described so far, and to all votewise distances based on symmetric seminorms, in addition to tournament distances. Thus all rules in Table 1 are anonymous.
Note that elements of V can be encoded by multisets which are essentially histograms. A standard measure of distance between histograms is the Earth Mover or transportation distance. The interpretation in our situation, when d is anonymous and standard, is that we must move voter mass between types of voters while incurring the minimum cost (distance). In fact in this case $\bar{d}$ is exactly the Earth Mover distance based on d. Computing it is a special case of the linear assignment problem of operations research. The minimum can be computed in polynomial time via the "Hungarian method" [11]. An equivalent formulation of the problem is to find a minimum weight matching in a bipartite graph.
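The following is a minimal sketch, not taken from the paper, of this assignment-problem view; the helper names are ours, and scipy's linear_sum_assignment stands in for the Hungarian method. It reproduces the two values quoted earlier for the situations with 2 abc and 3 bac voters against 2 bac and 3 cba voters.

from itertools import combinations
import numpy as np
from scipy.optimize import linear_sum_assignment

def kendall_tau(r1, r2):
    # Kemeny (swap) distance: number of candidate pairs ordered differently.
    pos1 = {c: i for i, c in enumerate(r1)}
    pos2 = {c: i for i, c in enumerate(r2)}
    return sum(1 for a, b in combinations(r1, 2)
               if (pos1[a] - pos1[b]) * (pos2[a] - pos2[b]) < 0)

def discrete(r1, r2):
    # Discrete underlying distance: 1 if the rankings differ, 0 otherwise.
    return 0 if r1 == r2 else 1

def votewise_distance(x, y, d):
    # Minimum-cost matching between the voters of situations x and y (lists of rankings).
    cost = np.array([[d(r1, r2) for r2 in y] for r1 in x])
    rows, cols = linear_sum_assignment(cost)
    return cost[rows, cols].sum()

x = ["abc"] * 2 + ["bac"] * 3
y = ["bac"] * 2 + ["cba"] * 3
print(votewise_distance(x, y, discrete))     # 3: three voters must change their votes
print(votewise_distance(x, y, kendall_tau))  # 8: value under the Kemeny metric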
5. Homogeneity
In this case we use a slightly different equivalence relation.
Definition 5.1. Let E = (C, V, π) be an election, where n = |V |. The vote distribution associated to E is the probability distribution on L(C) induced by the multiset N (E), which we denote D(E). The vote distribution map defines an equivalence relation ∼ on E in the usual way. We denote the quotient space by P.
Definition 5.2. A rule is homogeneous if and only if it is compatible with ∼. A distance is called totally homogeneous if it is totally compatible with ∼.

Remark 5.3. In other words, for a homogeneous rule, the set of winners depends only on the probability distribution of voter types: cloning each voter the same number of times makes no difference to the result. Our definition of homogeneity implies anonymity, because the equivalence relation used in this section refines the one used for anonymity. Some authors do not make it clear whether they consider homogeneous rules to be anonymous, because they give a definition in terms of profiles in which the cloned voters occupy a particular position. Of course, the two definitions are the same in the presence of anonymity.
It is important to note that ∼ is not induced by a group action. Rather, there is a monoid (a "group without inverses") acting. On V there is an action of the positive integers under multiplication where for each x ∈ V, k · x is the voting situation formed by adding k − 1 copies of each voter. A rule is homogeneous if it is anonymous and invariant under the action of this monoid. Thus, for example, starting with an election E and doubling or tripling the number of voters will lead to equivalent elections 2E, 3E, but there is not necessarily any way to get from 3E to 2E via an element of the monoid, because of the lack of inverses. This has important consequences, as we now see.
The above remark shows that Proposition 4.21 does not necessarily apply to homogeneity. In fact the conclusion is known to be false.
Example 5.4. Consider K = C and d = d 1 K . The rule R(K, d) is known as Dodgson's rule and is known not to be homogeneous, although it is anonymous. For example, consider the following example of Fishburn [8] with C = {a 1 , . . . , a 7 , x}. We start with a 1 . . . a 7 , and consider all its 7 cyclic permutations. We then insert x between the 4th and 5th entries in each case, so x is always in 5th position. Then d 1 K (E, C x ) = 7 (x must switch past each a i exactly once) but d 1 K (E, C a i ) = 6 for each i, because, for example, a 1 must switch past a 7 , a 6 , a 5 respectively 3, 2, 1 times.
However, let k ≥ 1 and consider the election kE. Then k −1 d 1 K (kE, C x ) → 3.5 as k → ∞ (because we need only just over 1/2 a switch per a i ), while k −1 d 1 K (kE, C a i ) → 4.5 (because, for example, a 1 must switch past a 7 , a 6 , a 5 respectively just over 2.5, 1.5, 0.5 times).
In the analogue of the proof of Proposition 4.21, we can conclude only that kE minimizes the distance to elements of D(K) of the form kE ′ , but not to all of D(K).
Remark 5.5. The same example shows that R(C, d 1 H ), the Voter Replacement Rule, is not homogeneous. In this case the limiting distances to C x and C a i are 1.75 and 1.5, while the distances for the original E are both equal to 2.
In order to prove a result similar to Proposition 4.21, we need a strong condition on K. We call an anonymous consensus divisible if every element of K r with kn voters has the form kE where E has n voters. This is a very strong condition -taking n = 1 shows that K is extended by S (up to possible permutation of the winners).
We now generalize [5,Thm 8], which dealt with the case where K = S and d is ℓ p -votewise, based on an underlying pseudometric.
Definition 5.6. An anonymous distance on E is homogeneous if for each k ≥ 1 and each pair of elections E, E', d(kE, kE') = d(E, E'). A family of symmetric seminorms N is homogeneous if $N_{nk}(x^{(k)}) = N_n(x)$ for all $x \in \mathbb{R}^n$ and all k ≥ 1. Here $x^{(k)}$ denotes the element of $\mathbb{R}^{nk}$ obtained by concatenating k copies of x.
Remark 5.7. The reader should avoid confusion by noting that the term homogeneous is often used for the different property of a seminorm expressed by the identity N (λx) = |λ|N (x).
Remark 5.8. Let d be a standard distance and N a symmetric seminorm. Then d N can be normalized to be homogeneous. Explicitly, let d N * (E, E ′ ) = n −1 d N (E, E ′ ) where E = (C, V, π) and |V | = n. The DR rules defined by d N and d N * are the same, since we are only scaling the distance by a constant factor. Proposition 5.9. Let K be a homogeneous divisible consensus and d a homogeneous distance. Then R(K, d) is homogeneous.
Proof. The proof of Proposition 4.21 adapts directly to this case, as described above.
Thus we recapture the well-known fact that Kemeny's rule R(S, d 1 K ) is homogeneous. Proposition 5.9 shows, for example, that although Dodgson's rule can be rationalized with respect to S and some distance (since Dodgson's rule satisfies the unanimity axiom), no such distance can be homogeneous.
6. The Votewise Minimizer Property
The failure of R(K, d) to inherit various conditions from K is related to the fact that minimization does not respect various operations. Roughly speaking, votewise distances combine better with votewise consensuses. We now make some technical (and rather strong) definitions that allow for several positive results when dealing with votewise distances. Definition 6.1. Let K be a compatible consensus and d a compatible distance. Say that (K, d) has the compatible minimizer property (CMP) if for each E, E ′ ∈ E with E ∼ E ′ and each r, d(E, K r ) = d(E ′ , K r ).
Remark 6.2. If ∼ is induced by a group action then the CMP is automatically satisfied, as used in the proof of Proposition 4.21. The analogue of Proposition 4.21 does not hold for general equivalence relations, as we see in Example 5.4. However, with the additional assumption of the CMP, everything works well. Proposition 6.3. Let K be a compatible consensus and d a compatible distance, and suppose that (K, d) satisfies the CMP. Then R(K, d) is compatible.
Proof. Let E, E ′ ∈ E with E ∼ E ′ . By CMP, d(E, K r ) = d(E ′ , K r ) for all r and in particular the minimizing values of r are the same. Example 6.4. If d is totally compatible then the CMP is automatically satisfied. Thus, for example, every rule R(K, d), where d is a tournament distance and K is anonymous and homogeneous, is anonymous and homogeneous.
Example 6.5. Consider the election E = (C, V, π) where C = {a, b}, V has size 5, and π = {ab, ab, ba, ba, ba}. Then d(E, C a ) = 1 for d ∈ {d H , d K }, and every minimizer differs from π only in that precisely one of the ba voters switches to ab. However, if we consider 3E then each minimizer requires not 3, but 2 switches. Thus (C, d) does not satisfy the CMP with respect to the equivalence relation used to define homogeneity. This also shows that (M, d) need not satisfy the CMP, because M coincides with C when m = 2.
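The counting in Example 6.5 can be checked with a short computation of our own; for two candidates, making a the Condorcet winner simply requires enough ba voters to switch so that a obtains a strict majority.

from math import ceil

def switches_to_make_a_condorcet(n_ab, n_ba):
    # Minimum number of ba voters who must switch to ab so that a strictly beats b.
    return max(0, ceil((n_ba - n_ab + 1) / 2))

print(switches_to_make_a_condorcet(2, 3))   # 1, as in Example 6.5
print(switches_to_make_a_condorcet(6, 9))   # 2, for the tripled election 3E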
Thus we should not necessarily expect Dodgson's rule or the Voter Replacement Rule to be homogeneous, and indeed they are not, as Example 5.4 shows. Definition 6.6. Suppose that d is votewise and anonymous, and K is anonymous. Say that (K, d) satisfies the votewise minimizer property (VMP) if the following condition is satisfied.
For each r ∈ L s (C) and each election E = (C, V, π) ∈ E, there exists a minimizer (C, V, π * ) ∈ K r of the distance from E to K r , so that for all i, d(π i , π * i ) depends only on π i and r. Proposition 6.7. If (K, d) satisfies the VMP, then • d(π i , π * i ) has the form δ(t, ρ) for some function δ, where t, ρ ∈ L(C); • d(E, K r ) has the form N (S) where S is the multiset of all values of δ(t, ρ) counted with multiplicity.
Proof. This follows directly from the definitions.
Example 6.8. Let K = W and d = d_K, and N = ℓ_2. For each E = (C, V, π) ∈ E and a ∈ C, d(E, W_a) = N(d(π_1, π*_1), . . . , d(π_n, π*_n)). We can take π* to be the ranking derived from π by swapping a to the top. Thus $d(E, W_a)^2 = \sum_{t \in L(C)} n(t)\, r(t, a)^2$, where n(t) is the number of times t occurs in π.

Proposition 6.9. Let K be an anonymous consensus and d a votewise anonymous and homogeneous distance. If (K, d) satisfies the VMP, then it satisfies the CMP with respect to the equivalence relation defining homogeneity.
Proof. For each E and r, there is a minimizer of d(E, K_r) for which the distance has the form N(S) where S is the multiset of values of d(π_i, π'_i) occurring. Thus it depends only on the equivalence class with respect to anonymity. By homogeneity of d, it in fact only depends on the equivalence class of E with respect to homogeneity. Example 6.5 shows that the VMP is not always satisfied, and Proposition 6.10 gives sufficient conditions for it to be satisfied. Proposition 6.10. Let d be an anonymous votewise distance on E. Suppose that the s-consensus K satisfies the following: for each r ∈ L_s(C), there is a nonempty subset S_r of L(C) such that K_r consists precisely of elections for which no voter has a ranking in S_r. Then (K, d) satisfies the VMP.
Proof. The minimizer in question is obtained by, for each i, choosing the closest element of L(C) under the underlying distance.
Example 6.11. (S s , d) satisfies the VMP for each s, because we can take S r to be the set of rankings which do not agree with r in all of their top s places. Any consensus which S s extends also satisfies VMP. For example, we can choose one fixed ranking that does agree with r in the top s places, and define S r to be its complement. Note that this example is not neutral.
6.1. Homogeneity. So far we can only show homogeneity when using S. We want to widen this to at least W. We use a definition from Elkind, Faliszewski and Slinko [5].
Definition 6.12. We call a seminorm N monotone in the positive orthant if whenever 0 ≤ x i ≤ y i for all i, N (x) ≤ N (y). Proposition 6.13. Suppose that K is homogeneous, d N is votewise, anonymous and homogeneous, (K, d N ) satisfies the VMP, and N is monotone in the positive orthant. Then R(K, d N ) is homogeneous.
Proof. Let E ∈ V and k ≥ 1. First note that d(E, K r ) = d(kE, kK r ) ≥ d(kE, K r ). We now prove the converse inequality.
By VMP, d(E, K_r) = N(S) where S is the multiset of values d(π, π*). Also by VMP and homogeneity, d(kE, K_r) = N(kS') where S' is the multiset of values d(π, π**) (here the minimizer may depend on k, so π** may not equal π*). Note that S' is elementwise at least as great as S, because π* is a minimizer. By homogeneity and monotonicity in the positive orthant, $d(kE, K_r) = N(kS') \geq N(kS) = N(S) = d(E, K_r)$, which gives the converse inequality.

Proof. Let E, E' ∈ E such that R(E) ∩ R(E') ≠ ∅. We show that for all r there are minimizers m(E, r), m(E', r) and m(E + E', r) such that m(E, r) + m(E', r) = m(E + E', r). The result then follows just as in Proposition 6.3. The claim follows easily from the VMP. Because minimization of the distance to r occurs votewise, it respects the split into E and E'. Corollary 6.19. If 1 ≤ p ≤ ∞, then R(S_s, d_p) is consistent.
Recall that Kemeny's rule is consistent when properly considered as a social welfare rule, but not when considered as a social choice rule (this point may potentially confuse readers of [5]).

6.3. Continuity. After fixing an arbitrary ordering on L(C), each partial social rule on ∆_Q(L(C)) of size 1 can be identified with an arbitrary function on an arbitrary nonempty subset of the rational points of the 6-simplex ∆_6, with image contained in C. Rules defined in this level of generality are not easy to deal with. Young [13] introduced the axiom of continuity.
If R is homogeneous then it is continuous if and only if every vote distribution sufficiently close to E in the ℓ 1 -norm on ∆(L(C)) yields the same output as E. We do not know of any voting rule seriously considered in the literature that is not continuous.
We now give a slight generalization of a result of Elkind, Faliszewski and Slinko [5, Thm 6]. Proposition 6.21. Suppose that K is continuous and homogeneous, d is votewise with respect to a continuous homogeneous seminorm and (K, d) satisfies the VMP. Then R(K, d) is continuous.
7. Conclusions and future work
We have clarified the relationship between distance rationalizability and axiomatic properties of social rules, and given improved necessary and sufficient conditions for rules to satisfy several of these axioms. The results show clearly that votewise distances combine better with votewise consensuses (which we define as those satisfying the VMP). The more complicated structure of consensuses such as C_s compared to S_s is reflected in the failure of various properties to extend. Of course, VMP is a very strong property, and we do not know of consensuses other than S_s that satisfy it generally. However, VMP and CMP may be satisfied by a particular (K, d) pair in a given application. What seems clear is that votewise distances work best with "votewise consensuses", and Condorcet consensus with tournament distances. Mixing the two yields rules such as Dodgson's and the Voter Replacement Rule which fail to satisfy basic properties such as homogeneity.
We have only a few sufficient conditions for homogeneity of a DR rule. If the rule is not homogeneous, a homogeneous rule similar to the original may be found. In [8] a way around the nonhomogeneity of Dodgson's and Young's rules was found, by using a limiting process to redefine the distance. This is unsatisfactory: it is not even clear that the limit exists. Presumably using the quotient construction $R(\bar{K}, \bar{d})$ may work, but it is not completely clear to us.
Systematic exploration of the space of rules R(S s , d p ) where d is a neutral distance on rankings, may well unearth new rules with desirable properties. These rules are already known to be continuous, neutral, anonymous, homogeneous and consistent. Other possibly desirable properties may also be satisfied: Kemeny's rule, which falls into this class, also satisfies a Condorcet property for social welfare rules [14] while scoring rules are also monotonic as social choice rules.
To our knowledge, ℓ p votewise distances with 1 < p < ∞ have not been studied systematically. Also, in addition to the discrete, inversion and Spearman metrics on rankings discussed here, there are many interesting distances on rankings yet to be explored. Besides votewise distances, there are many other interesting distances on V and P, which may yield useful new social rules. Such distances on multisets and statistical distances are heavily used in many application areas [3].
In a related forthcoming work, the present authors use the framework of distance rationalizability of anonymous and homogeneous rules to study the decisiveness of such rules. We expect other applications, for example by using different groups of symmetries.
Localization transition on complex networks via spectral statistics
The spectral statistics of complex networks are numerically studied. The features of the Anderson metal-insulator transition are found to be similar for a wide range of different networks. A metalinsulator transition as a function of the disorder can be observed for different classes of complex networks for which the average connectivity is small. The critical index of the transition corresponds to the mean field expectation. When the connectivity is higher, the amount of disorder needed to reach a certain degree of localization is proportional to the average connectivity, though a precise transition cannot be identified. The absence of a clear transition at high connectivity is probably due to the very compact structure of the highly connected networks, resulting in a small diameter even for a large number of sites. The Anderson transition predicts a transition from extended (metallic) to localized (insulating) eigenstates as a function of the disorder or energy of a quantum system [1]. This second-order phase transition turns out to be a very general property related to the transport of quantum and classical waves in disordered systems and signatures of it have been observed for electrons in metals, microwaves in waveguides, light in liquids and gels and acoustical waves in the earth crust [2]. For all these systems the clearest signature of the transition is the different spread of a signal injected into the system. While in a metallic phase the injected wave will spread all over the system, in the insulating phase it will be localized in the vicinity of the injection point. This effect is a result of the constructive interference between time-reversed path throughout the system. Thus, the details of the transition are strongly influenced by the dimensionality and topology of the system.
The lower critical dimension, below which the system is localized for all values of disorder, is believed to be two [3], since the probability of returning to the origin (i.e., constructive interference due to time reversal symmetry) is finite below d = 2. The upper critical dimension [4] (beyond which the critical exponents reach their meanfield values) remains uncertain although it is argued to be infinity [5]. The parameters defining the transition are traditionally given as the critical disorder W c expressed in terms of the width of the distribution from which the on-sites energies in the Anderson model are drawn, and the critical exponent ν (for definitions see Sec. II). For square lattices with dimensionality d = 3, 4 the values of W c and ν are well established : for d = 3, W c ∼ 16.5 and ν ∼ 1.5 [6,7], while for d = 4, W c ∼ 35 and ν ∼ 1 [8,9]. For higher dimensions the following extrapolation was offered [9]: ν ∼ 0.8/(d − 2) + 0.5 and W c ∼ 16.5(d − 2), which was obtained by studying the transition on bifractal topologies. Thus, the mean field critical index value of 1/2 is obtained in the the upper critical dimension d = ∞.
There has been recently much interest in the properties of random scale-free networks [10,11,12,13,14,15,16,17,18,19]. These networks are characterized by the fact that each node is connected with some finite probability to any other node in the graph, which is very different from the usual topology of a real space lattice in which nodes are connected only to their neighbors. This leads to a very interesting behavior of the graphs when properties such as percolation, cluster structures, paths length etc. are considered [11]. Interest in the influence this unusual topology has on the properties of wave interference (i.e., the Anderson transition of these graphs) is rising. Indeed, recently the Anderson transition in particular networks, namely the small-world networks [20,21] and the Cayley tree [22] were studied. Older work on the localization properties of sparse random matrices [23] is directly relevant to Erdös-Rényi graphs. Since scale free networks have an unusual topology for which anomalous classical properties have been found [24,25] it is of particular interest to study the Anderson localization in these networks.
Essentially, the probability to return to the origin defines the dimensionality of a system for the Anderson transition. Therefore, one may speculate that random graphs, which have only very long closed trajectories, correspond to systems with an infinite dimension [26]. On the other hand, the critical disorder which depends roughly on the number of nearest neighbors Z is expected to follow [1] W c ∼ Z , which for a random graph with an average degree k (i.e., the average number of connections per node) corresponds to W c ∼ k . Thus, one may expect here an interesting situation in which the critical index ν, which is determined by the dimensionality is close to a half, while the critical disorder is determined by k . This is very different than the situation described by the extrapolation given above, where the critical disorder for an infinite dimension should also be infinite.
Beyond the general interest in investigating the Anderson transition on scale free networks, and the fresh outlook it might provide on the localization phenomenon, the metal-insulator transition can provide insights into the functionality of complex networks. Consider for example an optical communication network. In such a net-work the edges of the graph represent optical fibers in which light propagates and the nodes represents a beam splitter which redistributes the incoming wave into the outgoing bonds connected to the node. Since for high quality optical fiber there are essentially no losses or decoherence on the bonds, the amplitude of the transmitted wave will not depend on the bond length (on the other hand phase will depend on the length). This network may be mapped on a tight binding Hamiltonian of the type described in Sec. II [27]. An interesting question for such a scale free network is whether a wave injected into one of the nodes will produce a signal at all other node (which in the language of the Anderson localization is equivalent to the question is the system metallic) or only at a finite set of other nodes (i.e., the system is insulating). A metal-insulator transition in this network corresponds to a phase transition between those two phases as function of the properties of the nodes. Generally, for any complex network in which information propagates in a wave-like fashion and interference is possible, the Anderson transition will limit the spread of the information throughout the network.
The paper is organized in the following way: In the next section (Sec. I) we describe the different networks which were considered, while in Sec. II the spectral statistics method which was applied in order to identify the metal-insulator transition is outlined. In Sec. III the results are depicted and some general characteristics of the localization on complex networks are discussed.
I. CHARACTERISTICS OF THE DIFFERENT NETWORKS
Our main goal is to study the Anderson transition for different complex networks. In this section we shall define the characteristics of the networks which will be considered.
A. Random Graph
A random graph (or random regular graph) is a graph with N nodes, each of which is connected to exactly k random neighbors [11]. The diameter of a graph is the maximal distance between any pair of its nodes. In a random graph the diameter d is proportional to ln N. In Sec. III we shall present results of the level spacing distribution for random-regular graphs with k = 3.
B. Erdös-Rényi Graphs
In their classical model from 1959, Erdös and Rényi (ER) [28] describe a graph with N nodes where every pair of nodes is connected with probability p, resulting in ⟨k⟩ = Np. For a large random graph the degree distribution follows the Poisson distribution P(k) = e^{−⟨k⟩}⟨k⟩^k/k!. The diameter of such a graph follows d ∼ ln N, similar to a random graph. In Sec. III we have specifically calculated the level distribution for ⟨k⟩ = 3, 3.1, 3.2, 3.5, 4, 5, 7.5 and 10.
C. Scale-Free Networks
Scale-free (SF) networks [10] are networks where the degree distribution (i.e., the fraction of sites with k connections) decays as a power law. The degree distribution is given by [29] P(k) = c k^{−λ} for m ≤ k ≤ K [14], where c is a normalization constant, λ is the power-law exponent, m is a lower cutoff, and K is an upper cutoff. Thus, there are no sites with degree below m or above K. The diameter of the SF network can be regarded as the mean distance of the sites from the site with the highest degree. For graphs with 2 < λ < 3 the distance behaves as d ∼ ln(ln(N)) [15], and for λ = 3 as d ∼ ln(N)/ln(ln(N)) [16]. This anomalous behavior stems from the structure of the network, where a small core containing most of the high-degree sites has a very small diameter. For higher values of λ the distance behaves as in ER, i.e., d ∼ ln(N). The ⟨k⟩ of an SF graph is obtained from the degree distribution as ⟨k⟩ = Σ_{k=m}^{K} k P(k). For λ > 2 and large enough N the average degree ⟨k⟩ is a constant. The SF networks analyzed in this paper correspond to λ = 3.5, 4, 5 with m = 2 (lower cutoff), and λ = 4, 5 with m = 3. Due to their small diameter, SF networks with λ < 3 were not considered.
D. Double peaked distributions
In order to find hierarchical relations between the different graphs, we also studied some variations on these graphs. For a random graph we changed the degree of a small percentage of the nodes, so we have a graph with a double-peaked distribution. Thus, the average connectivity ⟨k⟩ is the average degree of the nodes. Several examples were taken: changing 5% of the nodes to k = 5 (instead of k = 3), resulting in ⟨k⟩ = 3.1, or changing 5% of the nodes to k = 10 (⟨k⟩ = 3.35). Replacing the connectivity of 20% of the nodes for the previous cases will result in ⟨k⟩ = 3.4 (for k = 5 nodes) and ⟨k⟩ = 4.4 (for k = 10 nodes). Additionally, in order to relate to previous results of the metal-insulator transition on a Cayley tree [22], we checked a tree in which 5% of its nodes have a higher degree (k = 4), resulting in an average connectivity of 3.05 and creating a few closed trajectories (loops).
II. METHOD
Now we turn to the calculation of the spectral statistics of these networks. First, one must construct the appropriate network structure, i.e., determine which node is connected to which. This is achieved using the following algorithm [14,17]:
1. For each site choose a degree from the required distribution.
2. Create a list in which each site is repeated as many times as its degree.
3. Choose randomly two sites from the list and connect this pair of sites as long as they are different sites.
4. Remove the pair from the list. Return to 3.
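A minimal sketch of this stub-matching construction, with illustrative names of our own choosing, is given below; repeated edges are not filtered out, and pairs that would form self-loops are simply re-drawn.

import random

def build_network(n_sites, sample_degree):
    # Stub-matching construction following steps 1-4 above.
    degrees = [sample_degree() for _ in range(n_sites)]               # step 1
    stubs = [i for i, k in enumerate(degrees) for _ in range(k)]      # step 2
    edges = []
    while len(stubs) >= 2 and len(set(stubs)) > 1:
        i, j = random.sample(range(len(stubs)), 2)                    # step 3: pick two random entries
        if stubs[i] != stubs[j]:
            edges.append((stubs[i], stubs[j]))
            for idx in sorted((i, j), reverse=True):                  # step 4: remove the pair
                del stubs[idx]
    return degrees, edges

# Example usage: a random regular graph with k = 3 on 1000 sites.
degrees, edges = build_network(1000, lambda: 3)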
The diameter of a graph is calculated by building shells of sites [29]. The inner shell contains the node with the highest degree, the next contains all of its neighbors, and so on. Of course, each node is counted only once. The diameter of the system is then determined by the number of shells. Two more options which were considered are defining the diameter by the most highly populated shell, or by averaging over the shells. The diameters obtained by the various methods are quite similar.
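A companion sketch of the shell construction, reusing the output of the previous fragment (again the helper names are ours):

from collections import defaultdict

def shell_diameter(degrees, edges):
    # Build adjacency, start from the highest-degree node, and peel off neighbour shells.
    adj = defaultdict(set)
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    root = max(range(len(degrees)), key=lambda i: degrees[i])   # inner shell: highest-degree node
    visited, shell, n_shells = {root}, {root}, 0
    while shell:
        shell = {v for u in shell for v in adj[u]} - visited    # next shell of newly reached nodes
        visited |= shell
        n_shells += 1 if shell else 0
    return n_shells                                             # number of shells around the root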
The energy spectrum is calculated using the usual tight-binding Hamiltonian H = Σ_i ε_i |i⟩⟨i| + Σ_⟨i,j⟩ (|i⟩⟨j| + |j⟩⟨i|), where the first term of the Hamiltonian stands for the disordered on-site potential on each node i of the network. The on-site energies ε_i are uniformly distributed over the range −W/2 ≤ ε_i ≤ W/2. The second term corresponds to the hopping matrix element, which is set to 1, and ⟨i, j⟩ denotes nearest-neighbor nodes which are determined according to the network structure. We diagonalize the Hamiltonian exactly, and obtain N eigenvalues E_i (where N is the number of nodes in the graph) and eigenvectors ψ_i. Then we calculate the distribution P(s) of adjacent level spacings s, where s = (E_{i+1} − E_i)/⟨E_{i+1} − E_i⟩, and ⟨…⟩ denotes averaging over different realizations of disorder and, when relevant, also over different realizations of node connectivities. One expects the distribution to shift as a function of the on-site disorder from the Wigner surmise distribution P_W(s) = (π/2) s exp(−π s^2/4) (characteristic of extended states) at weak disorder to a Poisson distribution P_P(s) = exp(−s) (characteristic of localized states) at strong disorder. An example of such a transition is presented in Fig. 1, where a scale-free graph with λ = 4 and m = 2 was considered. As W increases, P(s) shifts toward the Poisson distribution. Additional hallmark features of the Anderson transition, such as the fact that all curves intersect at s = 2 and the peak of the distribution "climbs" along the Poisson curve for larger values of W, are also apparent. A similar transition from Wigner to Poisson statistics is seen also for the other networks considered in this study.
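The procedure just described can be sketched numerically as follows. This is an illustrative Python fragment, not the authors' code: each spectrum is normalized by its own mean gap (the paper averages the mean spacing over realizations, and no spectral unfolding is attempted here), and the Wigner-surmise and Poisson curves are given in their standard GOE forms.

import numpy as np

def level_spacings(n_sites, edges, W, seed=None):
    rng = np.random.default_rng(seed)
    H = np.diag(rng.uniform(-W / 2, W / 2, n_sites))   # disordered on-site energies
    for a, b in edges:
        H[a, b] = H[b, a] = 1.0                        # hopping matrix element set to 1
    E = np.linalg.eigvalsh(H)                          # exact diagonalization (sorted eigenvalues)
    gaps = np.diff(E)
    return gaps / gaps.mean()                          # normalized spacings s

# Reference curves for P(s): Wigner surmise (extended) and Poisson (localized).
wigner = lambda s: (np.pi / 2) * s * np.exp(-np.pi * s**2 / 4)
poisson = lambda s: np.exp(-s)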
The transition point can be determined more accurately by calculating a spectral measure γ [30], defined so that γ → 0 as the distribution tends toward the Wigner distribution and γ → 1 as the distribution approaches the Poisson distribution. One expects that as the system size increases, the finite-size corrections will become smaller, resulting in a distribution closer to a Wigner distribution in the metallic regime and to Poisson in the localized one. At the transition point the distribution should be independent of the system size. In Fig. 2 we plot the behavior of γ as a function of W for several sizes of a scale-free graph. Indeed, γ decreases with system size for small values of W while it increases with size for large values of W. All curves should cross around a particular value of disorder signifying the critical disorder. From finite-size scaling arguments [30] one expects that γ around the critical disorder will depend on the disorder and network size, L, through a one-parameter scaling form in which C is a constant. This relation enables us to extract both the critical disorder W_c and the critical index ν.
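The exact integral defining γ is not reproduced above; a commonly used choice consistent with the stated limits (0 for Wigner, 1 for Poisson) compares the weight of P(s) beyond the crossing point s = 2, and is sketched below as an assumption rather than as the paper's definition.

import numpy as np

def gamma_measure(spacings):
    # Tail weight of the empirical P(s) beyond s = 2, rescaled between the two limits.
    A = np.mean(np.asarray(spacings) > 2)   # empirical integral of P(s) over s > 2
    A_wigner = np.exp(-np.pi)               # Wigner-surmise tail beyond s = 2
    A_poisson = np.exp(-2.0)                # Poisson tail beyond s = 2
    return (A - A_wigner) / (A_poisson - A_wigner)

# Near the transition, gamma(W, L) is typically fitted to a one-parameter scaling
# ansatz such as gamma_c + C * (W - W_c) * L**(1 / nu) (an assumed standard form).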
Scaling of the numerical data according to Eq. (7) yields two branches corresponding to the metallic and localized regimes, which are clearly seen in Fig. 3. The estimated values of ν and W_c (see Table I) are extracted by fitting the branches to a 4th-order polynomial. One exception is the tree, for which sizes L = 11, 12 were used (where L is the number of "generations" of the tree). Another exception is for Erdös-Rényi graphs in which ⟨k⟩ is between 3 and 3.5. The low connectivity of these graphs results in one main cluster and a relatively large number of not-connected nodes (about 5%). Thus, the calculations are made only for the largest cluster of each realization, since a procedure that considers all the nodes is skewed by the eigenvalues of small disconnected clusters [31]. A clear localization transition is observed for a group of graphs which are all characterized by an average degree ⟨k⟩ smaller than 3.1 and an averaged last occupied shell ⟨l⟩ (for N = 1000 sites) larger than or equal to 9.45. The results are summarized in Table I. The results for all the graphs (including those which show no clear signs of transition) can be scaled according to their average degree ⟨k⟩. The higher the value of ⟨k⟩, the higher the value of W needed in order to obtain a specific value of γ. Thus, the higher the average degree, the more metallic the system is, which makes sense. A cross section at γ = 0.6 of all curves is shown in Fig. 4 as a function of ⟨k⟩. The ⟨k⟩ of the networks studied in Fig. 4 as well as the averaged last occupied shell ⟨l⟩ for N = 1000 sites are presented in Table II.
The following observations can be gleaned from the data for the different networks: (1) For all the networks that show a metal-insulator transition, ν is of order 1/2, except for the Random-Regular "double-peak" network, which is the one with the highest value of connectivity that still shows a clear transition. A critical index of ν = 0.5 is expected for a system of infinite dimensionality. At ⟨k⟩ = 3.1 the value of ν is significantly higher, but so is the estimate of the error bar. On the other hand, for the Erdös-Rényi graph with ⟨k⟩ = 3.1 no clear transition is observed.
(2) None of the networks with connectivity above 3.1 shows clear signs of a metal-insulator transition. Nevertheless, one should be rather careful in interpreting this observation since, as is clear from Table II, larger values of ⟨k⟩ lead to a smaller size, ⟨l⟩, of the network for the same number of nodes. Moreover, of the two networks which have the same ⟨k⟩ = 3.1, only the one with the higher value of ⟨l⟩ shows clear signs of the metal-insulator transition. Thus, the absence of a transition may be an artifact of the small size of networks with high average connectivity.
(3) The critical disorder W_c fluctuates in the range of 12−20 (Table I). Due to the small range of ⟨k⟩ (2.97−3.1), it is hard to determine any relation between ⟨k⟩ and W_c.
(4) On the other hand, there is a clear relation between the amount of disorder needed in order to reach a particular value of γ (i.e., the value of W needed to reach a certain degree of localization) and ⟨k⟩. As can be seen in Fig. 4, a linear dependence W(γ = 0.6) ∝ ⟨k⟩ is observed.
Thus, the gross features of the Anderson metal-insulator transition are similar for a wide range of different networks. The critical indices for all the networks studied here are within the range expected for a system of infinite dimensionality, and the connectivity influences the degree of localization. Thus, complex networks are an example of a topology for which the critical index follows the mean-field prediction, but the critical disorder can be tuned by the connectivity. On the other hand, the fact that networks with high connectivity are very compact raises the problem of identifying the transition point. It is hard to extend the usual finite-size scaling method to networks with high connectivity since the number of sites grows very rapidly with size, while for small network sizes the crossover behavior of the γ curves is very noisy. This results in an inability to clearly identify the Anderson transition, although the possibility cannot be ruled out that there is a critical connectivity for complex networks above which no transition exists.
Influencing Factors of Pre-Exposure Prophylaxis Self-Efficacy Among Men Who Have Sex With Men
This research examines the level of pre-exposure prophylaxis (PrEP) self-efficacy among HIV-negative men who have sex with men (MSM) in China and identifies the influencing factors associated with the level of PrEP self-efficacy in terms of social-demographic characteristics and social psychological factors. The data were gathered from a baseline assessment of a longitudinal randomized controlled intervention trial. From April 2013 to March 2015, nonprobability sampling was used to recruit HIV-negative MSM at Chongqing, Guangxi, Xinjiang, and Sichuan in west China. A total of 1884 HIV-negative MSM were analyzed. Chi-square test and nonparametric rank sum test were used for univariate analysis. Multivariable linear regression analysis was used to discuss the factors that influence the level of PrEP self-efficacy. Overall levels of PrEP self-efficacy were low, and five factors were found to effect PrEP self-efficacy: age, residence, AIDS-related knowledge, PrEP-related motivation, and anxiety. Age and anxiety score were negatively related to PrEP self-efficacy. The higher the age and anxiety score, the lower the PrEP self-efficacy. AIDS-related knowledge and PrEP-related motivation were actively related to PrEP self-efficacy. The higher the knowledge and motivation score, the higher the PrEP self-efficacy. In addition, the PrEP self-efficacy level of MSM in rural areas is lower than that in urban areas. The lower level of self-efficacy in the MSM population needs to be improved. Pertinent interventions should be taken to promote the self-efficacy of PrEP in MSM, to enhance their willingness to take medicine, improve their medication adherence, and thus reduce HIV infection among MSM.
In a study supported by National Science and Technology major projects (2011-2015), the protective rate in participants with high adherence was 50%, but there was no difference between those with low adherence and those without medication. This study suggests PrEP adherence affects the efficacy of PrEP. Studies have reported that the higher the individuals' self-efficacy, the greater the positive influence on adherence to the behavior and the degree of effort (Coffman, 2008). So it is especially necessary to study PrEP self-efficacy.
Self-efficacy was first proposed by the American psychologist Bandura in 1977 (Bandura, 1977). It refers to personal expectations and the subjective confidence in one's behavior. Personal expectations determine whether coping behavior will be initiated, how much effort will be expended, and how long it will be sustained in the face of obstacles and aversive experiences. Self-efficacy, as the determinant of individual behavior, is also the center of the individual factors that often influence the choice and continuation of individual behavior. The higher the level of self-efficacy, the higher the level of behavior adoption, behavior maintenance, and degree of effort (Coffman, 2008). It is necessary to understand the level of self-efficacy of PrEP in the MSM population and to explore the factors affecting the self-efficacy.
The following studies have also been done involving psychosocial factors among Chinese MSM. In a qualitative interview in Shanghai, China, prevention strategies in the MSM population will be hindered due to MSM's sexual stigma and discrimination (J. X. Liu & Choi, 2006). In other studies of PrEP, self-efficacy was related inversely to involvement in HIV risk practices (Klein, 2014). Domestic and foreign studies of the factors affecting self-efficacy mainly focused on patients with chronic diseases such as diabetes, cancer, chronic obstructive pulmonary disease, and asthma (Champion et al., 2013;Du, Everett, Newton, Salamonson, & Davidson, 2012;Franks, Chapman, Duberstein, & Jerant, 2009;Hays, Finch, Saha, Marrero, & Ackermann, 2014;Hunt et al., 2012;Mancuso, Sayles, & Allegrante, 2010). Studies have reported that the main factors affecting self-efficacy include social demographic characteristics, physiological conditions, and social psychological factors. The studies regarding the factors affecting self-efficacy in the MSM population focused mainly on the use of condoms (Klein, 2014;Li et al., 2017;Traeen et al., 2014). PrEP studies are in the self-efficacy description phase, and there is seldom a further exploration of the influencing factors.
As an important part of social cognitive theory, selfefficacy theory can explain the relationship between human cognition and behavior. Many psychological studies suggest that action self-efficacy is an important factor influencing behavioral initiation and behavior persistence. The present study aimed to evaluate the self-efficacy of PrEP and identify the factors affecting PrEP self-efficacy among HIV-negative MSM in China from the prospective of sociodemographic characteristics and social psychology factors and to provide a theoretical basis for pertinent interventions.
Participants and Design
A total of 1,914 HIV-negative MSM were recruited at the baseline of a longitudinal randomized controlled intervention trial (a PrEP study of oral tenofovir among MSM in western China from April 2013 to March 2015. Registration Number: ChiCTR-TRC-13003849) according to the inclusion criteria; 30 participants did not meet the age criteria. In total, 1,884 MSM data were analyzed. A nonprobability sampling method was used to recruit participants in four research sites in west China including Chongqing, Guangxi, Xinjiang, and Sichuan, from April 2013 to March 2015. The main methods of recruitment included: (a) media publicity, such as publishing information on the MSM website; (b) cooperation with nongovernmental organizations (NGOs); (c) through the Centers for Disease Control and Prevention in each city; (d) after finding the "seed," through peer introduction and the "snowball" sampling method to find other research subjects; (e) recruitment from existing MSM cohorts of previous research projects. The inclusion criteria were as follows: signing informed consent; age ≥18 and ≤65; HIV antibody negative; participate in sexual intercourse once or more every 2 weeks; at least one or more male partners one month before the trial; willing to use the study medication under guidance and to obey follow-up arrangements; willing to participate in the trial for 96 weeks. A self-administered questionnaire survey on paper including sociodemographic characteristics, HIV-related knowledge, and psychological scales was collected.
Measurement
Sociodemographic characteristics were collected by an anonymous questionnaire survey. The survey included self-reported age, ethnicity and residence, education attainment, employment status, marital status, and average monthly income.
The PrEP self-efficacy scales were measured with eight questions. The scale is based on the revision by Galavotti et al. (1995) and has proven to be reliable. Subjects were asked "In the following situations, how confident are you to continue to use HIV preventive medicine?" The actual items of this scale are shown in Table 1. Answers for each item are as follows: 1 stands for strongly unconfident; 2 for unconfident; 3 for comparatively confident; 4 for confident; and 5 for strongly confident. The higher the total score of these items, the greater the self-efficacy of taking drugs. Usually, strongly unconfident and unconfident suggest a low self-efficacy level, while comparatively confident, confident, and strongly confident suggest a high self-efficacy level (Cronbach's α = 0.832).
A self-rating anxiety scale (SAS; Zung, 1971) was widely (J. Liu et al., 2012;S. Liu et al., 2014;Luo, Feng, Xu, & Zhang, 2014;Samakouri et al., 2012) used to evaluate the degree of anxiety because of its good reliability and validity (Zhao et al., 2012). SAS, scored by four grades, was mainly used to evaluate the frequency of the symptoms. Among all the items, 15 were positively scored while 5 were reverse scored. The scores of the 20 items were added together to obtain the raw score, and then multiplied by 1.25, and the integer was taken as the standard score. Usually, 50 was the cut-off value. A total score of anxiety below 50 points was considered normal, 50-59 as mild anxiety, 60-69 as moderate anxiety, and above 70 as severe anxiety (Cronbach's α = 0.774).
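As a small illustration (ours, not part of the original instrument documentation), the SAS scoring described above can be written as follows; which five items are reverse-scored is treated as a parameter because the text does not list them.

def score_sas(item_responses, reverse_items):
    # item_responses: dict item_no -> response in 1..4; reverse_items: set of the 5 reversed items.
    raw = sum((5 - v) if i in reverse_items else v for i, v in item_responses.items())
    standard = int(raw * 1.25)                      # standard score = integer part of raw score x 1.25
    if standard < 50:
        level = "normal"
    elif standard < 60:
        level = "mild anxiety"
    elif standard < 70:
        level = "moderate anxiety"
    else:
        level = "severe anxiety"
    return standard, level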
The Center for Epidemiological Studies Depression Scale (CES-D; Radloff, 1977) consists of 20 items, rated on a scale of 0 to 3, with a total score of 60 points. Usually, 16 points (Makambi, Williams, Taylor, Rosenberg, & Adams-Campbell, 2009) is the cut-off value, and ≥16 indicates depression symptoms. Cronbach's α coefficient has high internal consistency between 0.85 and 0.90.
AIDS-related knowledge included 13 items based on the revision of the International AIDS Knowledge Survey General scale (DiClemente, Zorn, & Temoshok, 1986; Galavotti et al., 1995; Koopman, Rotherman-Borus, Henderson, Bradley, & Hunter, 1990), involving the infection, spread, and treatment of AIDS. The items of this scale are shown in Table 2. The answers were set as True, False, or Don't know. There was 1 point assigned for a positive answer, 0 for a negative answer, and "not known" was rated as negative. The higher the score, the more information is known. These items compose the AIDS-related knowledge score (Cronbach's α = 0.84).
PrEP-related motivation was measured with 11 items. The scale is a self-designed questionnaire by experts (see Table 3 for details). Its range is 11-55. It includes risk perception of AIDS, negative effects of drugs, knowledge and attitude of sexual partners, objective factors of drugs, and the interaction between doctors and patients. It has been proven to have good construct validity and content validity and has been used in previous studies (He & Zhong, 2014) A 5-point Likert scale was used to measure the answer of motivation (1 = completely no and 5 = always). An adverse value was used in questions No. 4 to No. 9. These items compose the PrEP-related motivation score. The higher the score, the greater the PrEP motivation (Cronbach's α = 0.720).
Statistical Analysis
Table 1. Items of the PrEP self-efficacy scale ("In the following situations, how confident are you to continue to use HIV preventive medicine?"): when you are (recently) drinking or using other drugs; when your partner is upset with it; when you feel it has side effects; when the trouble of HIV is too much; when you think that your partners will be angry with the use of HIV preventive drugs; when you think the risk of AIDS is very low; when you have used other protective measures such as condoms.

Epidata 3.1 was used to establish a database, and double-entry data checking and logical verification were conducted. IBM SPSS 21 was used for data processing and analysis. Measurement data were statistically described using the mean, standard deviation, median, and extreme values. The count data were described by frequency distribution. The mean ± standard deviation was used to describe the self-efficacy level, knowledge score, anxiety level, and depression level. Chi-square test and nonparametric rank sum test were used for univariate analysis, while multiple stepwise regression was used for multivariable analysis. The inclusion and removal criteria were 0.05. The statistical significance level was set at p < .05.
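The multivariable step can be sketched as follows. This is an illustrative Python analogue (statsmodels OLS with 0.05 entry/removal thresholds) of the SPSS stepwise procedure, not the authors' code; the column names are hypothetical, and categorical predictors such as residence would need to be dummy-coded beforehand.

import pandas as pd
import statsmodels.api as sm

def stepwise_ols(df, outcome, candidates, enter=0.05, remove=0.05):
    selected = []
    while True:
        changed = False
        # Forward step: add the most significant remaining candidate if its p-value is below `enter`.
        remaining = [c for c in candidates if c not in selected]
        pvals = {}
        for c in remaining:
            model = sm.OLS(df[outcome], sm.add_constant(df[selected + [c]])).fit()
            pvals[c] = model.pvalues[c]
        if pvals and min(pvals.values()) < enter:
            selected.append(min(pvals, key=pvals.get))
            changed = True
        # Backward step: drop any selected variable whose p-value exceeds `remove`.
        if selected:
            model = sm.OLS(df[outcome], sm.add_constant(df[selected])).fit()
            worst = model.pvalues.drop("const").idxmax()
            if model.pvalues[worst] > remove:
                selected.remove(worst)
                changed = True
        if not changed:
            break
    return sm.OLS(df[outcome], sm.add_constant(df[selected])).fit()

# Hypothetical usage:
# fit = stepwise_ols(df, "self_efficacy",
#                    ["age", "residence_rural", "knowledge", "motivation", "anxiety", "depression"])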
Social-Demographic Characteristics
A total of 1,914 MSM were recruited, among which 1,884 MSM were analyzed according to the inclusion criteria with ages ranging from 18 to 58, and a median age of 28. Most participants were unmarried (74.31%), of Han nationality (92.78%), employed (77.57%), urban residents (71.99%), had a college or higher degree (61.48%), and reported a monthly income below 3,000 RMB (52.44%; Table 4).
Self-Efficacy Level
The results showed that the minimum self-efficacy score of 1,884 MSM was 8 points, while the maximum was 40 points, with a mean score of 22.24 ± 6.40. Of the participants, 1,133 (60.14%) felt strongly unconfident or unconfident using HIV prevention drugs, and 751 (39.86%) felt confident using HIV prevention drugs.
Anxiety and Depression Status
The mean SAS and CES-D scores for 1,884 MSM were 41.57 ± 10.30 and 17.96 ± 10.55, respectively. Among them, 401 participants had anxiety symptoms, accounting for 21.28% of participants, and the mild, moderate, and severe anxiety symptoms were 16.61%, 4.03%, and 0.64% of participants, respectively. There were 946 participants with depression symptoms (50.21%).
Knowledge and Motivation Status
The mean scores of AIDS-related knowledge and PrEP-related motivation were 8.28 ± 2.36 and 40.19 ± 5.84, respectively. Only 17.57% of the 1,884 MSM participants scored above 10 points on the AIDS-related knowledge test, while 82.43% scored 10 points or below.
Univariate Analysis of PrEP Self-Efficacy in MSM Population
Social-demographic univariate analysis of PrEP self-efficacy in MSM. The results showed that self-efficacy differed by residence, education, and marital status (p < .05). The self-efficacy of the MSM population in urban areas is higher than that in rural areas. The self-efficacy of those whose education was above college level is higher than that of those with an education level below high school. The self-efficacy scores of those who are unmarried are higher than the scores of those who are married or divorced. There was no significant difference in the self-efficacy of the MSM population with regard to age, ethnicity, employment status, and monthly income (p > .05; see Table 5).
Multivariable Analysis of the PrEP Self-Efficacy Level Among MSM
Independent variable assignment. Self-efficacy was the dependent variable. The social-demographic characteristics, knowledge score, motivation score, anxiety, and depression with p < .15 after univariate analysis were set as independent variables. The inclusion and exclusion criteria were set at 0.05. The independent variable assignments are presented in Table 6. Table 7 shows that five statistically significant factors entered the regression equation. According to the standardized partial regression coefficients, the factors affecting self-efficacy were age, residence, motivation, anxiety, and knowledge score. The results showed that the higher the AIDS-related knowledge and PrEP-related motivation scores, the higher the PrEP self-efficacy level. The older the age, the lower the PrEP self-efficacy level. Those registered as living in a rural area had a lower PrEP self-efficacy level. The higher the anxiety score, the lower the PrEP self-efficacy.
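To make the dummy coding (Table 6) and the stepwise selection concrete, here is a minimal sketch of forward entry at p < 0.05 using statsmodels; the dataframe and column names are hypothetical, and for brevity only the entry step is shown (the paper also applies a removal criterion at 0.05).

```python
# Illustrative forward stepwise OLS selection; `df` and column names are hypothetical.
import pandas as pd
import statsmodels.api as sm

def forward_stepwise(df, outcome, candidates, alpha_in=0.05):
    selected, remaining = [], list(candidates)
    while remaining:
        pvals = {}
        for var in remaining:
            X = sm.add_constant(df[selected + [var]])
            pvals[var] = sm.OLS(df[outcome], X).fit().pvalues[var]
        best = min(pvals, key=pvals.get)
        if pvals[best] >= alpha_in:
            break
        selected.append(best)
        remaining.remove(best)
    return selected

# Categorical factors dummy-coded as in Table 6 (first level used as reference):
# df = pd.get_dummies(df, columns=["education", "marital_status", "income_band"], drop_first=True)
# candidates = ["age", "residence_rural", "anxiety", "depression", "knowledge", "motivation"]
# print(forward_stepwise(df, "self_efficacy", candidates))
```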
Discussion
Many psychological studies suggest that action self-efficacy is an important factor influencing behavioral initiation and persistence. The results of this study show that the PrEP self-efficacy level was low among HIV-negative MSM in China, with an average score of 22.24 points. Over 60% of participants felt strongly unconfident or unconfident about using PrEP. The reason may be that in China the MSM population is viewed as immoral and unacceptable, and even the mere perception of stigma can influence individual behavior, adding to chronic stress and detracting from physical and psychological well-being. This population suffers from fears, prejudices, and discrimination, and their inner health beliefs are weak.
The AIDS-related knowledge score was low among Chinese HIV-negative MSM, with an average score of 8.28; only 17.57% of participants answered more than 10 questions correctly. PrEP-related motivation was at a moderate level, with an average score of 40.19. Overall, AIDS-related knowledge and the PrEP motivation level among HIV-negative MSM in this study were not high. The proportion of anxiety and depression symptoms in the MSM population was relatively high, with average scores of 41.57 and 17.96, respectively. Among the 1,884 MSM participants, 21.28% had anxiety symptoms and 50.21% had depression symptoms.
Further analysis shows that sociodemographic factors, AIDS-related knowledge, PrEP-related motivation, and anxiety are all related to PrEP self-efficacy. The older the age, the lower the PrEP self-efficacy. This is consistent with most studies at home and abroad. As age increases, the MSM population suffers from gradual deterioration of physical function, and as self-perception declines, the sense of self-existence decreases. It was also identified that MSM in rural areas had lower PrEP self-efficacy compared to urban areas. In rural areas, MSM are conservative, lack access to health knowledge, and lack confidence in carrying out healthy behaviors, so their self-efficacy is low. For rural and older populations, health education and knowledge promotion should be used to improve PrEP self-efficacy. This survey shows significant differences in AIDS-related knowledge, PrEP motivation, and PrEP self-efficacy. The more AIDS-related knowledge and the stronger the motivation for PrEP, the higher the PrEP self-efficacy. According to the Knowledge, Attitude and Practice (KAP) theory, knowledge can help participants establish correct health beliefs and increase their confidence. In the motivation constructs, the risk perception of AIDS, negative effects of PrEP, knowledge and attitude of sexual partners, and objective factors of PrEP all affect PrEP self-efficacy. Negative effects of PrEP are inversely related to self-efficacy, whereas the risk perception of AIDS and the interaction between doctors and patients are positively related to self-efficacy. Bandura reported in his "desensitization" study that mental state is one of the key factors affecting self-efficacy. The present study suggests that PrEP self-efficacy has a negative correlation with anxiety level, while there was no significant difference between depression levels. This result is consistent with previous findings in other populations (Razavi, Shahrabi, & Siamian, 2017). Therefore, we should continue to popularize AIDS and PrEP-related knowledge and strengthen psychological intervention for MSM to promote health.
This study suggests that in the process of PrEP intervention among HIV-negative MSM in China, it is necessary to analyze the general self-efficacy, psychological state, and PrEP-related motivation of the MSM population. For MSM, we should strengthen the dissemination of routine AIDS prevention and control knowledge, conduct health education and psychological guidance, and reduce the incidence of anxiety and depression, so as to improve self-efficacy for AIDS prevention. Adjusting the intervention strategy is of great significance to the prevention and control of AIDS. Regarding older and rural MSM, we should support them and enhance their sense of self-efficacy. We should also strengthen the relationship between medical workers and the MSM population, so that the population feels social warmth and support, to improve their self-efficacy.
There are some potential limitations in this study that deserve attention. First, this is a cross-sectional study that only examined the level of self-efficacy of MSM at the current stage and did not observe the dynamic change of PrEP self-efficacy in MSM; this problem has yet to be studied further. Second, self-administered questionnaires were used. Because the content of the survey involves sensitive issues and the questionnaires were completed with the help of the investigators, some participants may not have been willing to answer the relevant questions, resulting in bias and missing data.
The results of this study are similar to those of other Chinese research populations, which makes them more credible. The results of this study are of great significance to the MSM population's willingness and adherence to PrEP. Tailored interventions should be taken to strengthen the health beliefs of Chinese HIV-negative MSM and improve PrEP self-efficacy among MSM populations, thereby increasing the willingness of MSM to take medications, improving drug compliance, and reducing HIV infection among MSM populations.
Table 6. Assignment Sheet of the Influencing Factors of PrEP Self-Efficacy.
Age: initial data
Residence: 1 = urban; 2 = rural
Education attainment (a): 1 = junior high school and below; 2 = high school; 3 = college, undergraduate training or higher
Marital status (a): 1 = single; 2 = married; 3 = divorced
Monthly income (a): 1 = ≤3,000; 2 = 3,000-5,000; 3 = ≥5,000
Anxiety score: initial data
Depression score: initial data
Knowledge score: initial data
Motivation score: initial data
Note. (a) All are dummy variables when entering the model.
Unraveling the concept of local seeds in restoration ecology
Scientific works converge toward the importance of using seeds of local origin in restoration to limit biodiversity loss and increase ecosystem resilience. Efforts are made to define what should be considered as local seeds. However, the concept of local seeds remains complex to delimit both scientifically and operationally, and carries non-neutral assumptions that impact restoration activities. This article aims to unravel the concept by examining its construction using a social science approach crossed with ecology. The interest for the genetic origin of plant material has developed since the 1990-2000s, in a context of international debates on biodiversity conservation. The delimitation of the local seeds concept necessarily integrates paradoxical assumptions: one of the major ones is that the local character of a plant is relative to both the reference ecosystem and the species considered. Moreover, it also depends on the objectives of restoration, the feasibility of the chosen method for restoration and the regulations. To overcome these paradoxes, compromises and translations are made to delineate collectively and operationally what is local. By adding a cross perspective between social sciences and restoration ecology to the debate, we highlight that the constructions of the local seeds concept integrate a diversity of ecological, sociotechnical, and economic assumptions that are not neutral for restoration. This perspective on the concept, its ambiguities, and its contingencies leads us to underline the importance of reflexive and integrative approaches to work at different scales on standards for the use of local seeds in restoration.
• The concept of local seeds is relative and results from a collective construction to meet a need.
• To meet the operational needs of ecological restoration, the constructions of the concept of local seeds integrate a diversity of ecological, sociotechnical, and economic considerations.
• To understand how the concept is constructed, what compromises and translations it contains and on which paradigms it is based, social science approaches are complementary to those of ecology.
• To stabilize common operational definitions at different scales, integrative standards are necessary.
Opening the "Black Box" of the Local Seeds Concept In restoration ecology, scientific papers and recommendations converge on the importance of using plant material of local origin (Sackville Hamilton 2001;Bischoff et al. 2010;Vander Mijnsbrugge et al. 2010). However, reaching consensus on what can be considered as local seeds remains complex, both scientifically and operationally. Here we revisit the local seeds concept by adding a social science approach crossed with ecology to the debate. To do so, we take up the concept of translation, which designates the processes of linking natural, technical, and social elements leading to the production of new scientific or technical statements (Latour 1987). These translations link heterogeneous issues and proceeds to different alliances to arrive at a new statement intelligible to others actors. For instance, operational guidelines for restoration stakeholder result from translations that transform theoretical statements by integrating sociotechnical assumptions, economic configurations, restoration networks, regulations, strategies, and management philosophies. These translations operate within particular contexts that guide research work. They involve interactions and negotiations between various rationalities and interests, those of restoration stakeholders and symmetrically of scientists. Latour (1987) showed the limits of the model of diffusion that consists in focusing on the way scientific statements spread in society and leaving the scientific complexity in "black boxes" of made science. The model of translation he elaborated consists in considering the construction of scientific statements in its social interactions, before these statements stabilize in black boxes that are then closed. In this perspective, we open the "black box" of the "local seeds" concept to trace back its construction processes and focus on how scientific production on local seeds is being co-constructed in interaction with society. Considering local seeds in this perspective is to hypothesize that (1) there is not a proper local character of which scientists should gradually reveal the features, that (2) the concept of local seeds is not neutral but collectively co-constructed, integrating various issues to answer a need in a given context, and that (3) examining the process of this construction allows to better understand it and to help improving these processes by making them more reflexive and integrative.
To develop operational knowledge, the construction process should include not only dedicated exchanges including the various scientific and operational stakeholders in a co-construction approach, but also an identification of assumptions and "rational myths" (Hatchuel et al. 1987) at stake. Indeed, scientific and operational concepts such as "local seeds" can be seen as "a nexus of assumptions, rational myths, belief systems, hypotheses and material constraints which stem from broader institutional forces, intervene in the building of patterns of actions, and open new performance possibilities and inventions" (Labatut et al. 2012). In many cases of local seed delimitation, operational stakeholders are already taken into account or associated in the knowledge construction. In contrast, the "local seed" concept is often "naturalized" and the identification of its underlying assumptions is little implemented and discussed. Therefore, we want to emphasize that scientific knowledge about local seeds in ecological restoration has developed in historical contexts, is based on different scientific positions, and integrates feasibility issues encountered in the field. From there, we develop several items that help explain the difficulties in agreeing on a common definition of local origin of seeds. Finally, we focus on the compromises to overcome the paradoxes of restoration with local seeds. The challenge is to agree collectively on the contours of the concept in order to delineate operational common standards at different scales.
Emergences of the Local Seeds Concept
First Emergence in the 1990-2000s
The topic of the genetic origin of plant material used in restoration projects emerged in the 1990-2000s in research (Fig. 1), in parallel to international debates on biodiversity issues. The Convention on Biological Diversity (CBD) in 1992 marks an institutionalization of biodiversity as a political and a societal issue. Its definition of biodiversity includes genetic diversity, establishing diversity within species itself as a conservation issue (Sackville Hamilton 2001). The idea of genetic erosion, developed since the late 1950s, has gradually led to the awareness of the loss of local crop varieties. As a reaction, in situ conservation projects of "crop diversity" have been implemented since the 1990s (Fenzi & Bonneuil 2016). In the notion of agricultural "genetic resources," the gene is conceived as "the proper unit of biodiversity" (Fenzi & Bonneuil 2016). This focus shift toward the genetic level in conservation also occurs in restoration ecology, redirecting its research agenda. The idea of favoring plant material of local origin for ecological restoration has developed (Fig. 1) in this context of institutionalization of the conservation of genetic diversity, to give birth to the "local-is-best" paradigm (Broadhurst et al. 2008;Jones 2013;Breed et al. 2018).
A Need Arising From Identified Ecological Hazards
Scientists in the field of restoration ecology have promoted the use of local seeds to avoid various risks linked to the introduction of nonlocal plants (Vander Mijnsbrugge et al. 2010). Nonlocal plants may have lower fitness than local flora, can be maladapted to the environment (Moore 2000;Bischoff et al. 2006;Breed et al. 2018), and may hybridize with local flora leading to outbreeding depression (Moore 2000;Sackville Hamilton 2001). Moreover, nonlocal plants with high phenotypic plasticity may outcompete local flora (McKay et al. 2005;Bischoff et al. 2006) or negatively interact with other organisms as their reproductive cycles differ from local plants (Sackville Hamilton 2001;Bucharova et al. 2019). In addition, the choice of local seeds can be justified by the precautionary principle (Moore 2000;Jones 2013).
A Developing Concept With Variable Boundaries and Names
However, the concept of local seeds remains ambiguous (Breed et al. 2018), giving room for various interpretations or even misunderstandings. Recent works strive toward precise general principles and standards, but they underline the multiplicity of elements to take into account and the need for local guidelines (Gann et al. 2019;Pedrini & Dixon 2020). Moreover, a variety of terms is used to qualify these seeds (Fig. 1). The term "local" can refer to a "previously existing genotype at a site" (Hufford & Mazer 2003), or in a broader sense "to mean that the populations originate where found, and by extension, are adapted to local environmental conditions" (Vander Mijnsbrugge et al. 2010). "Native" and "indigenous" among others are considered as synonyms (Hufford & Mazer 2003), despite the fact that each of them has its own history and connotations before being used to qualify restoration seeds.
The Scientific and Technical Foundations of the Concept: Revegetation Goals and Paradigms
The criteria for supplying restorative seeds depend on the operation's goals, which may differ and confront each other. Couix and Hazard (2013) have shown that these goals rely on different paradigms (i.e. theoretical conceptions) of conservation and ecological restoration. Within biodiversity studies, there are different scientific positionings that associate both research approaches and general visions of ecological issues, which Granjou and Arpin (2015) call epistemic commitments. These commitments impact knowledge produced for restoration (Rodriguez et al. 2018) and orient in particular the choices of priority goals.
Indeed, depending on whether one seeks to restore a gene pool, a set of taxa, a dynamic, or service, the seed supply requirements vary. According to the Society for Ecological Restoration (SER) standards, ecological restoration aims to "achieve ecosystem recovery, insofar as possible and relative to an appropriate local native model (termed here a reference ecosystem)," while rehabilitation focuses on the restoration of ecosystem functionalities, "without seeking to also recover a substantial proportion of the native biota" (McDonald et al. 2016). The positioning in these different types of environmental repair efforts and their goals is decisive in the seed supply criteria.
Relativity of the Local Seeds Concept
Drawing Boundaries on a Continuum of Nativity
The local or nonlocal character of a plant is relative in relation to an environment to be restored or a reference ecosystem. In the survey of Smith and Winslow (2001) on perceptions of native status, a respondent answered: "Nativity is a continuum and we humans want to categorize. So there is inherent conflict. The truth is that there are shades of nativity. But practically we do have to draw lines sometimes." A binary separation between local and nonlocal does not reflect the gradient of local origin inherent to the continuum of the living. Therefore no systematic criterion delineates the desirable plant material for revegetation.
Reference Ecosystems and Arbitration Between Restoration Goals, Decisive Choices for Seed-Sourcing Strategies
The choice of seeds depends on the restoration goals and subsequently on the reference ecosystem chosen for the restoration project, which can be historical or contemporary (McDonald et al. 2016). A contemporary reference ecosystem aims at reconstitution of a flora similar in composition to that of a nearby site, whereas targeting a historical reference ecosystem leads to taking into account further elements such as the management of the site during the reference time. There is an evolutionary normativity in the choice of the reference: Moreau et al. (2019) have thus shown that the landscape reference of open rather than forest environments resulted from a historical construction, dating from the 1990s in the case of the French Causses. In many cases, the sites to be restored already have known uses that have strongly deviated them from their previous states (Broadhurst et al. 2008;Jones 2013;Breed et al. 2018) and local seeds may not be the better option (Broadhurst et al. 2008;Jones 2013). In this perspective, Jones (2013) states that the widespread "local-is-best" assumption should be nuanced: local stricto sensu may be better and should be preferred when no data supports the opposite. Broadhurst et al. (2008) even argue that "failure by scientists to recognize that many of the assumptions underlying the local is best paradigm are without a strong scientific basis serves to maintain misconceptions among practitioners." The restoration goals, determining the selection of plant material, can interact and require arbitration (Table 1). This connects to a debate in evolutionary thinking (Gould & Lewontin 1979) over the assumption that natural selection leads to optimization. Jones (2013) argues that exceptions to the "local-is-best paradigm" are reported more and more often, in particular because of the strong alterations of the environments to restore. Seed-sourcing strategies like "composite provenancing" and "admixture provenancing" (Table 1) recommend using seeds from various sources to increase genetic diversity, enhance adaptability, and limit both risks of inbreeding and of outbreeding (Broadhurst et al. 2008;Breed et al. 2018;Bucharova et al. 2019). Jones (2013) regrets that this type of initiative is discouraged by what he calls a "belief in the merit of local plant material," as well as an emphasis on the risk of outbreeding depression. Finally, "predictive provenancing" and "climate adjusted provenancing" (Breed et al. 2018;Bucharova et al. 2019), other seed-sourcing strategies, even recommend using nonlocal seeds from areas ecologically similar to the future target area, given predicted climate change. Global changes indeed upset the concept of a local environment with stable conditions (Broadhurst et al. 2008). Such provenancing strategies are forms of organizational intervention that involve accepting risks, uncertainty, and partial knowledge (Hatchuel et al. 1987). To propose the solutions expected from them, scientists must therefore unfold chosen logics that combine ecological foundations with management philosophies and representations of relational organizations in restoration (Hatchuel & Weil 1992).
These strategies result from the linking and translation of these different issues into unified statements (Table 1; Fig. 2). They rely on what Hatchuel et al. (1987) call "rational myths," both rational and limited by empirical constraints and assumptions, but allowing mobilization around a representation: in our case, the representation of conservative, adaptive, or interventionist paradigms in what we call a paradigmatic gradient (Table 1).
Geographic, Ecological, or Genetic Proximity: Compromises Directed by the Requirement of Feasibility
Sourcing local seeds brings up another question related to that of goals: should preference be given to the geographical proximity of the seed sampling area, to its ecological similarity to the area to restore, or to its genetic or phylogenetic proximity to the reference vegetation? Plants from adjoining areas may be less adapted than others from areas more distant but ecologically close to the habitat to be restored (Vander Mijnsbrugge et al. 2010). Geographical proximity allows supply areas for plant material to be defined pragmatically. In order to evaluate the genetic identity between the plant material used for restoration and the target flora, it is also possible to carry out genetic studies using molecular markers. However, these studies, feasible for research purposes in the field of restoration genetics (Hufford & Mazer 2003), are not carried out systematically for each species in restoration operations. Such studies are therefore more useful to refine the definition of source zone criteria and can serve to delineate seed zones.
The Seed Transfer Zones, Operational Translations of Scientific Compromises
The delimitation of seed transfer zones allows to a certain extent combining criteria of geographical proximity and ecological proximity. The idea is to map regions within which plant material can be collected for use in restoration operations in the same region. However, belonging to the same seed transfer zone does not guarantee, beyond a very small scale, neither the genetic connection nor the habitat similarity. This is especially true in mountain areas, where topographic and climatic barriers can hinder gene flow (Schönswetter et al. 2005), and where a same-seed transfer zone includes a wide range of altitudes and climates. Moreover, genetic diversity and adaptation patterns vary by species (Basey et al. 2015), depending on their dispersion, their pollination modes, and their longevity (Wilkinson 2001;Jones 2003;Broadhurst et al. 2008;Malaval et al. 2010;Vander Mijnsbrugge et al. 2010). Thus, for Jones (2003), the choice of the gene pool to be used for revegetation depends on the species' pattern of genetic variation, which can be more or less continuous or discrete along a spatial gradient. Plant fitness decreases with their degree of heterozygosity, which is lower in inbred, small, and isolated populations (Vander Mijnsbrugge et al. 2010). The delimitation of seed transfer zones should therefore ideally be defined for each species (Bower et al. 2014;Bucharova et al. 2019), which seems unfeasible from an operational point of view.
Despite all these theoretical complications in delineating an acceptable origin for plant restoration material, there is a need for guidelines for seed supply (Bower et al. 2014). New translations (Fig. 2) are then necessary to overcome paradoxes and reach operational compromises.
Overcoming Paradoxes Through Compromises
From Ecological Knowledge to Operational Guidelines: a Series of Translations That Integrate Sociotechnical and Economic Issues
The paradoxes of local seeds stem from a fundamental paradox in ecological restoration: seeking to "'assist recovery' of a natural or semi-natural ecosystem" (McDonald et al. 2016) through anthropic intervention, although the difficulty of this enterprise is widely recognized (Palmer et al. 2006). Anthropic intervention involves taking into account the technical feasibility, the temporality of the intervention and monitoring, as well as its financing. For seed supply, this translates into a selection that may include ecological criteria, but must necessarily include operational criteria such as seed availability, production costs (Broadhurst et al. 2008), or even the possibility of producing seeds of the targeted species. Even if local seeds are identified by scientists as generally the best solution (Sackville Hamilton 2001; Bucharova et al. 2019), the practical translation is very variable, as scientists condense their knowledge into operational guidelines, which can be too restrictive to reach an operational level (Broadhurst et al. 2008). Practical guidelines require agreeing collectively on a common delimitation of what is considered as local seeds, even if that implies enlarging the definition to the point of integrating paradoxes. Such conception work integrates stakeholders at different levels: conservation and regulation actors, seed harvesters, producers, and restoration practitioners, insofar as each brings knowledge in terms of feasibility. Such participatory approaches already apply to define guidelines, and we state that they should be implemented as early as possible in the translation chain.
Delimitation of Seed Transfer Zones and Species Lists Building for Regulation Through Negotiations With Stakeholders
In terms of recommendation, the first criterion that needs to be clarified is the definition of seed transfer zones (Breed et al. 2018). This results in the delimitation of operational seed zones, applicable to all species, which limit the risk of maladaptation of the seeds used. In different countries, seed transfer zones have thus been delimited, giving rise to guidelines, regulations, or collective marks (Tischew et al. 2011;Bower et al. 2014;Shaw & Jensen 2014;Basey et al. 2015;Jørgensen et al. 2016;Abbandonato et al. 2018;Bucharova et al. 2019). These zones, which can be delineated on geoclimatic, biological, or genetic criteria according to the provenancing strategies (Table 1), result from several arbitrations. Their definition implies creating the same artificial boundaries for all species. To provide sufficient economic opportunities for seed companies, their area may be large. In most of the countries where zones were delineated, these were designed according to relaxed "local provenancing" or "regional admixture provenancing" strategies (Table 1), except for Norway where genetic patterns were used (Jørgensen et al. 2016). The delimitation is also subject to compromises between different stakeholders, as shown by the example of the "Alpes" zone delimited in France for the collective mark Végétal Local. While scientists had a preference for separating the northern Alps from the southern Alps, the seed companies argued that the potential market for such limited areas could not sustain a sector. The stakeholders have therefore agreed to delimit a single Alpes zone. Comparable arbitrations took place in Germany to ensure the practical implementation of the mapping (Bucharova et al. 2019). The delimitation in discrete zones induces paradoxes well noticed by the stakeholders of the restoration. In particular, the rule assumes that if a restoration project is located near a border, it is acceptable to source from another end of the seed transfer zone, but not just across the border. Within these zones, all species can theoretically be collected to restore any type of environment. Finally, the collection site has to have never been seeded, which is difficult to verify. These paradoxes can lead to abuse or mistakes.
Despite these paradoxes, scientists (Bower et al. 2014;Jørgensen et al. 2016;Bucharova et al. 2019) underline that the solution of seed transfer zones is the most desirable and feasible. In Europe, the Directive 2010/60 has instituted the possibility of producing local seeds for restoration, as long as collection and production take place in limited seed transfer zones. To avoid the exaggerated use of plants within the same zone, in Germany, for nonwoody plants the mapping of the zones is completed by lists of species for each zone (Bucharova et al. 2019). The combination of seed transfer zone and species lists has also been initiated, in connection with seed harvesters and producers, in France for the Alpes zone (Huc et al. 2018). In addition to species selection based on ecological criteria, agronomic criteria to allow their collection and multiplication, and regulatory ones to enable their commercialization have to be taken into consideration (Leger & Baughman 2014). However, proposing standard seed mixtures for all the restoration projects in a seed transfer zone can lead to introducing new species in an area.
Multiplication and Its Paradoxes, One More Compromise With Nativity
The possibilities of direct harvesting being limited and not sufficient to meet the demand, seed production by multiplication is required (Tischew et al. 2011;Abbandonato et al. 2018). The devices of seed transfer zones must therefore be associated with multiplication rules that guarantee the genetic origin and the diversity of the seed produced. Multiplication of local seeds is in itself paradoxical since it consists in cultivating plants intended to initiate dynamics for the reconstitution of environments that tend toward natural ones. The implementation of multiplication rules implies once again translating a conception of the nativity into operational guidelines. In fact, genetic selection happens at all stages: during collection on the local sites, growing at the multiplication site, harvesting, drying and cleaning of harvested seeds, transport, and seeding/germination/establishment on the restoration site (Basey et al. 2015). The risk of reducing genetic variability can be limited by requirements but cannot be avoided. Moreover, multiplication excludes some species, due to absence of knowledge on how to germinate or to harvest them, or because their breeding is more expensive than their harvest in the wild.
All stages of multiplication require frameworks of compromise between limiting the risk of genetic impoverishment and not making production too difficult.
Prioritization of Criteria Between Availability and Nativity
In prioritizing different criteria, a decisive criterion for the choice of seeds is their availability in sufficient quantities at the desired moment. This is why several authors (Jones 2003;Breed et al. 2018;Bucharova et al. 2019) defend the interest of sowing nonlocal seeds of local species (for instance what Bucharova et al. (2019) call "native cultivars") if it is not possible to source local seeds. In order to avoid giving way to invasive plants, the criterion of local genetic origin takes second place. A large part of revegetation operations are indeed carried out with seed mixtures composed of agricultural and horticultural cultivars, and wildflowers of unknown or nonlocal origin (Tischew et al. 2011;Ladouceur et al. 2018;Bucharova et al. 2019), which are available in large quantities. However, these are selected and bred for fodder production for domestic livestock and not for ecological restoration purposes. Their selection must meet the standards of seed regulation, and they consequently suffer from low phenotypic plasticity and genetic variability. Considering nonlocal seeds as a lesser evil is an argument for the status quo of cultivar use. The guidelines for restoration practice must therefore be formulated strategically to avoid the misappropriation of nuanced scientific conclusions.
Constructing Standards Collectively for a Collective Recognition
To be collectively recognized by both actors of restoration and scientists, the local seeds delimitation must be feasible with the means of practitioners and consistent with the research results. Different complementary scales are relevant to collectively define the modalities required for local seeds. The SER standards provide global recommendations that can guide the local requirements. International and national standards are also set up on the basis of regulations, as with European Directive 2010/60 (Tischew et al. 2011;Abbandonato et al. 2018). At national and local levels, operational translations of the concept of local seeds are formalized in standards, rather than on the tripartite model of certification or collective mark (Fouilleux & Loconto 2017;Bucharova et al. 2019;. In all cases, the standards are negotiated in order to delimit an operational definition of local seeds.
Conclusions
The concept of local seeds results from a construction process, which links and integrates natural, technical, and social elements. It is collectively designed to meet the needs and criteria of ecological restoration, and can be defined in different ways, none of which is neutral. The approach in social science crossed with that of ecology allowed us to open the black box of the concept and shed light on its origins, networks of translations, and paradoxes. This helps to better understand the dynamics between science and restoration practices and to better support the development of local seeds.
The different scientific and technical conceptions of restoration seeds and the underlying assumptions are so far little stated and discussed. We believe that the diversity of restoration conceptions and objectives should be made more explicit in an ecological debate that integrates the operational stakeholders from the knowledge construction onward. Unraveling the underlying conceptions and assumptions invites a more reflexive and integrative construction of the concept of local seeds.
By making explicit non-neutral elements in the construction of scientific knowledge through the model of translation, we wish to open a dialogue between social sciences and ecology allowing access to the social processes of scientific construction. To our knowledge, there is no social science work on local seeds, although ecologists deal with organizational issues in position or policy articles (Tischew et al. 2011;De Vitis et al. 2017;Abbandonato et al. 2018;Ladouceur et al. 2018). By shedding light on the sociotechnical mechanisms involved, the social sciences can provide a reflexivity on scientific work in perpetual progress. Conversely, the investigation of ecological theories allows social scientists to better understand the principles underlying the actions to develop local seeds in restoration.
The diversity of paradigms and objectives for restoration invites us to integrate the stakeholders not only in the operationalization of knowledge but also in its scientific construction, by opening the paradigmatic debate, beyond the scientific sphere, to all stakeholders. The paradoxes inherent in the definition of local seeds are already raised by different stakeholders involved in the revegetation and can even be used to discredit the work on local seeds. The challenge is to avoid the relativism that all seeds are equal regardless of their origin. For all these reasons, it is necessary to support the development of robust common standards to frame the use of local seeds. Meanwhile, climate change is upsetting the restoration issues and is likely to bring about changes in knowledge and practices (Temperton 2007). Thus, there is a need both for a formalization of the definition of local seeds to promote the operational transition, and for a continuing research on restoration that includes social sciences and interdisciplinary approaches as well as ecology.
The Influence of Quality of Pharmacy Installation Services on Satisfaction of BPJS Outpatients in General Hospital dr. GL Tobing
Patient satisfaction with hospital pharmacy services supports the quality of life of outpatients. Data on BPJS outpatient visits in July 2018 decreased by 21.5% at dr. GL. Tobing in Tanjung Morawa. The waiting time for pharmaceutical services when collecting drugs is > 60 minutes, and collecting drugs can even take two to three days, so it is suspected that the quality of pharmaceutical services in the Pharmacy Installation has not been effective. The purpose of the study was to analyze the effect of the quality of pharmacy installation services on the satisfaction of BPJS outpatients at the dr. GL. Tobing Tanjung Morawa. This research is quantitative with an explanatory research approach. The sample is 100 BPJS outpatients. The research was conducted from November 2019 to February 2020 through the distribution of questionnaires. Data were analyzed using univariate, bivariate, and multivariate methods with a multiple logistic regression test at a 95% significance level. The results showed that service quality had a significant effect on BPJS outpatient satisfaction on the dimensions of reliability (p = 0.019), responsiveness (p = 0.013), assurance (p = 0.026), direct evidence (tangible) (p = 0.006), and empathy (p = 0.003). The probability of patient satisfaction when the quality of pharmaceutical services is good is 86.23%. The conclusion is that the dimensions of pharmaceutical service quality, consisting of reliability, responsiveness, assurance, direct evidence, and empathy, affect BPJS outpatient satisfaction, with the empathy dimension having the dominant effect on patient satisfaction. It is recommended that hospital management plan the proposed addition of pharmacists, drug planning, the availability of online entertainment services in the form of Wireless Fidelity (WiFi), regular communication training for staff, and the completion of identity and cellphone numbers in patients' medical records.
Introduction
A hospital is an institution that provides services to the community, especially health services. Hospital health services are delivered comprehensively, covering inpatient, outpatient, and emergency services. Comprehensive health services include promotive, preventive, curative, and rehabilitative care.
Pharmacy services are among the services a hospital must provide. Pharmaceutical requirements must ensure the availability of quality, useful, safe, and affordable pharmaceutical preparations and medical devices. Pharmaceutical preparation services in hospitals must follow pharmaceutical service standards. The management of medical devices, pharmaceutical preparations, and consumables in hospitals must be carried out by a one-stop pharmacy installation. The price of pharmaceutical supplies at the hospital pharmacy installation must be reasonable and based on the benchmark price set by the government.
Hospitals providing pharmaceutical services serve not only inpatients but also outpatients from the community, with the aim of achieving definite outcomes that improve the quality of life of both general and BPJS patients. This is stated in the Regulation of the Minister of Health Number 11 of 2016 concerning the Implementation of Executive Outpatient Hospitals as supporting services, facilities, and equipment.
General Hospital dr. GL Tobing Tanjung Morawa, accredited as type C, is one of the subsidiary hospitals of PT Perkebunan Nusantara II and is required to provide the best service to patients and to be able to compete with other service providers. The Pharmacy Installation of the General Hospital of Dr. GL Tobing Tanjung Morawa serves 9 general outpatient clinics with three pharmacists assisted by several pharmaceutical technical personnel who have pharmaceutical knowledge. Outpatients are one of the types of patients served at the General Hospital dr. GL Tobing Tanjung Morawa, and they include general patients and patients registered as BPJS members. Patient satisfaction with health services is very important to note because it describes the quality of service at the health service facility.
Hospital survival and income are largely determined by the number of patient visits in the hospital. Data on outpatient visits to the pharmacy installation at the General Hospital dr. GL Tobing Tanjung Morawa show a decline in BPJS outpatient visits (21.5% in July 2018). A survey conducted by interviewing 10 outpatients found that at the General Hospital dr. GL Tobing the waiting time for pharmacy services when collecting drugs exceeds 60 minutes, and collecting drugs can even take two to three days. This situation is not in accordance with government regulations, which state that the standard waiting time for pharmaceutical services is 30 minutes for finished drugs and 60 minutes for compounded drugs.
Research conducted by Rusdiana et al. (2015) examined the quality of pharmaceutical services based on prescription completion time in hospitals. The results show that an outpatient prescription completion time of less than 13 minutes provided the greatest satisfaction on the assurance variable, supported by the questionnaire result with the highest score of 3.29 agreeing that the drug waiting time is not long on the responsiveness variable. The longer the time to complete a doctor's prescription, the lower the satisfaction level of outpatients. The suggestions given were to increase the number of pharmacists and expand the pharmacy installation room so that outpatient satisfaction increases.
Research conducted by Fahrizal (2018) analyzed the implementation of Minimum Service Standards (SPM) for hospitals in the pharmacy sector at the Pharmacy Installation of the Muara Teweh Regional General Hospital. The observations showed that no medication errors occurred during the study (100% error-free). This is in accordance with the Decree of the Minister of Health Number 128 of 2008 concerning Hospital SPM in the pharmaceutical sector, namely the absence of drug administration errors (100%). The Pharmacy Installation of Muara Teweh Hospital applies a double-check procedure in prescription services, in which each prescription sheet must be handled by more than one officer to prevent medication errors.
Research conducted by Pitoyo (2016) explains that effective prescription handling avoids medication errors by conducting a prescription review in three stages: the first at the drug preparation stage, the second at the drug labeling stage, and the last at the IEC and drug delivery stage to the patient. The types of dispensing error cases that occurred in the outpatient pharmacy service at the hospital pharmacy where the study was conducted were the wrong drug, the wrong drug strength, and the wrong quantity.
Satisfaction is one of the benchmarks for a service unit in a hospital. However, the fact is that the pharmacy installation service has no suggestion box, and there has never been an assessment of patient satisfaction in the unit. This can of course hinder the improvement of service evaluation. In addition, another problem in the pharmacy service of the dr. GL. Tobing Tanjung Morawa Hospital is the occurrence of delays in the distribution of drugs from drug distributors, so that the availability of drugs cannot meet drug needs in pharmaceutical services.
The results of research conducted by Irene (2017) also informed this study. Based on some of the problems that exist in the Pharmacy Installation of the General Hospital dr. GL Tobing Tanjung Morawa, the author was interested in conducting research with the title "The Effect of Quality of Pharmacy Installation Services on BPJS Outpatient Patient Satisfaction at the General Hospital of dr. GL Tobing Tanjung Morawa." The purpose of this study was to determine and analyze the effect of pharmacy service quality (reliability, responsiveness, assurance, physical evidence, and empathy) on the satisfaction of BPJS outpatients at the General Hospital dr. GL. Tobing Tanjung Morawa.
Method
This research uses a quantitative, non-experimental design with a cross-sectional approach. The research approach used is explanatory research through correlational research, namely research that aims to explain the relationship between two or more variables [9]. The population in this study was BPJS outpatients who visited the Pharmacy Installation of General Hospital dr. GL. Tobing Tanjung Morawa, totaling 1,230 patients. Sampling was carried out by purposive sampling of research subjects found in the waiting room of the pharmacy installation, using the inclusion criteria, yielding 92 patients. The data analysis used in this study was univariate, bivariate, and multivariate analysis with logistic regression.
In Table 1, regarding the distribution of respondents, it can be seen that of the 100 respondents observed, the majority were female, 72 people (72.0%), in the 30-34 year age range, 36 people (36.0%), with high school as the last education level, 45 people (45.0%), working as PNS/TNI/POLRI, 26 people (26%), and with income below the minimum wage, 53 people (53.0%).
Table 2 shows the measurement results. For reliability, 72 respondents (72%) considered the reliability of the pharmacy installation services to be good. For responsiveness, 71 respondents (71%) rated responsiveness as good. For assurance, 73 respondents (73%) rated assurance as good. For tangible evidence, 71 respondents (71%) rated the direct (tangible) evidence as good. For empathy, 60 respondents (60%) rated empathy as good. For patient satisfaction, 58 respondents (58%) were satisfied.
Univariate Analysis
The results of the study of 100 respondents in Table 3 show, for reliability, that 72 respondents considered the reliability of pharmacy installation services to be good, and the majority of them, 52 people (72.2%), were also satisfied. Of the 28 respondents who considered reliability not good, 22 (78.6%) were also dissatisfied. Based on the chi-square analysis, the reliability variable has a p value of 0.001 (p < 0.05), so it can be concluded that the reliability of pharmacy installation services is significantly related to patient satisfaction. For responsiveness, 71 respondents considered responsiveness to be good, and the majority, 48 respondents (67.6%), were satisfied. Of the other 29 respondents who assessed responsiveness as not good, the majority, 19 respondents (65.5%), were dissatisfied. Based on the chi-square analysis, the responsiveness variable has a p value of 0.003 (p < 0.05), so the responsiveness of the pharmacy installation services is significantly related to patient satisfaction. For assurance, of the 100 respondents observed, 73 considered assurance to be good, and the majority, 52 respondents (71.2%), were satisfied. Of the other 27 respondents who assessed assurance as not good, 21 (77.8%) were dissatisfied.
Based on the chi-square analysis, the assurance variable has a p value of 0.001 (p < 0.05), so the assurance of pharmacy services is significantly related to patient satisfaction. For tangible evidence, of the 100 respondents observed, 71 considered tangible evidence to be good, and the majority, 54 respondents (76.1%), were satisfied. Of the 29 respondents who judged tangible evidence to be not good, the majority, 25 respondents (86.2%), were also dissatisfied. Based on the chi-square analysis, the direct evidence (tangible) variable has a p value of 0.001 (p < 0.05), so direct evidence is significantly related to patient satisfaction. For empathy, of the 100 respondents observed, 60 rated the empathy of the pharmacy installation service as good, and the majority, 40 respondents (66.7%), were satisfied. Of the 40 respondents who assessed empathy as not good, the majority, 22 respondents (55.0%), were dissatisfied. Based on the chi-square analysis, the empathy variable has a p value of 0.040 (p < 0.05), so empathy is significantly related to patient satisfaction. Table 4 shows that all variables have a p value < 0.05, meaning that all independent variables have a significant effect on patient satisfaction. From the probability value it can be explained that if the quality of pharmaceutical services is good on the dimensions of reliability, responsiveness, assurance, tangibles, and empathy, the probability that patients are satisfied is 86.23%; for the remaining 13.77%, patients are not satisfied.
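As a sanity check on the univariate tests, the reliability-by-satisfaction counts reported above (52 satisfied among the 72 respondents rating reliability as good, 22 dissatisfied among the 28 rating it as not good, implying 20 and 6 in the remaining cells) can be fed to a standard chi-square routine. This is an illustrative sketch, and the exact p value may differ slightly from the published 0.001 depending on whether a continuity correction is applied.

```python
# Illustrative chi-square cross-check using the counts reported in the text.
from scipy.stats import chi2_contingency

#               satisfied  dissatisfied
table = [[52, 20],   # reliability rated "good"      (n = 72)
         [6, 22]]    # reliability rated "not good"  (n = 28)

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.2g}")
```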
Discussion
The Effect of Reliability on BPJS Outpatient Satisfaction at the General Hospital of dr. GL Tobing Tanjung Morawa
Statistically, the higher the reliability aspect of the pharmacist, the better the patient satisfaction in the hospital. The p value of 0.019 is smaller than 0.05, meaning Ho is rejected and Ha is accepted; that is, reliability has an effect on BPJS outpatient satisfaction. The OR value of 4.963 means that when the reliability of the pharmacy installation service is good, the odds of patient satisfaction are 4.963 times higher than when reliability is poor.
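To make the odds-ratio and probability figures concrete, the following is a minimal worked relation under the logistic model; it uses only the published summary numbers (the 86.23% predicted probability and OR = 4.963) and not the authors' fitted coefficients, which are not reported here.

```latex
\[
\hat{p} \;=\; \frac{e^{\beta_0 + \sum_i \beta_i x_i}}{1 + e^{\beta_0 + \sum_i \beta_i x_i}},
\qquad
\text{odds} \;=\; \frac{\hat{p}}{1-\hat{p}} \;=\; \frac{0.8623}{0.1377} \;\approx\; 6.26,
\qquad
\mathrm{OR}_{\text{reliability}} \;=\; e^{\beta_{\text{reliability}}} \;=\; 4.963 .
\]
```

Read this way, an odds ratio of 4.963 multiplies the odds of satisfaction rather than the probability itself, which is why the predicted probability saturates near 86% instead of scaling linearly with each dimension.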
The results of this study are relevant to previous research finding that the reliability variable has a significant effect on patient satisfaction at the Pharmacy Installation of the Luwuk Regional Hospital, Banggai Regency, with a significance value of 0.000 (p < 0.05). However, they differ from the research by Saragih (2021), whose multivariate analysis showed that the reliability variable had no effect on outpatient satisfaction at the Pharmacy Installation of RSUD Engku H Daud, with p = 0.151.
However, in this study, some patients felt that pharmacist reliability was not good. BPJS outpatients were not satisfied with the ability of pharmacists to provide services, which seemed slow. This is because the number of pharmacists is still limited, and this limitation hinders the process of providing drugs according to the doctor's diagnosis. Likewise, the waiting time for collecting drugs is more than 30 minutes, making the pharmacy installation seem slow in serving the prescriptions submitted by patients due to the limited number of drug-compounding personnel. In addition, job evaluations of pharmacists are rarely carried out to determine the current performance of officers who are experiencing a shortage of human resources. Based on the author's observations, the hospital has only 3 pharmacists, assisted by 2 pharmaceutical technical personnel.
According to the author, a shortage of available pharmacists can hamper the quality of pharmaceutical services, causing dissatisfaction among BPJS outpatients at the hospital. Hospital management needs to plan the proposed addition of pharmacists and pharmaceutical technical personnel to support pharmaceutical services and to evaluate the performance of pharmacists in the hospital in the future.
The Effect of Responsiveness on BPJS Outpatient Satisfaction at the General Hospital of dr. GL Tobing Tanjung Morawa
Statistically, the higher the responsiveness aspect of the pharmacy staff, the better the patient satisfaction in the hospital. This can be seen from the p value of 0.013, which is smaller than 0.05, meaning Ho is rejected and Ha is accepted: responsiveness has an effect on BPJS outpatient satisfaction. The OR value of 4.403 means that a pharmacy installation service with good responsiveness has a 4.403 times greater chance of producing patient satisfaction than one with poor responsiveness.
This is consistent with Harijanto's research (2018), in which bivariate analysis showed a significant effect of service quality on patient satisfaction at the Pulmonary Hospital Pharmacy Installation, with a significance (p) value of 0.048, which is smaller than α = 0.05 [12]. Likewise, research by Rahmawati (2016) reported a positive and significant relationship between pharmaceutical services and the level of patient satisfaction in health services, with a significance value of 0.0006 [13].
According to the author, the poor responsiveness of pharmacists, caused by the involvement of non-pharmaceutical-technical (non-TTK) staff in carrying out pharmaceutical duties, leaves patients dissatisfied with the information given about the rules for using their drugs. In addition, the pharmacy installation only explains after the fact when a drug needed by a patient is not available. Patients expect to be told in advance which drugs are unavailable so that they do not wait unnecessarily and can immediately buy the drugs at another pharmacy. For this reason, staff need to know as early as possible which drugs are out of stock at the pharmacy installation, through a management information system that facilitates the pharmaceutical management process in the hospital. This is in line with the goal of the Pharmacy Installation of the General Hospital of dr. G.L Tobing Tanjung Morawa, namely to provide fast, precise and accurate pharmaceutical services according to pharmaceutical service standards, supported by professional human resources.
The Effect of Guarantee on BPJS Outpatient Satisfaction at the General Hospital of dr. GL Tobing Tanjung Morawa
Statistically, the higher the assurance aspect of the pharmacy service, the better the patient satisfaction in the hospital. This can be seen from the p value of 0.026, which is smaller than 0.05, meaning Ho is rejected and Ha is accepted: assurance has an effect on BPJS outpatient satisfaction. The OR value of 4.192 means that a pharmacy installation service with good assurance has a 4.192 times greater chance of producing patient satisfaction than one whose assurance is considered not good.
These findings are in line with the results of the Rosydelia study (2015) at the Pharmacy Installation of the Tk II Hospital dr. Soepraoen, Malang city, which found a relationship between the quality of pharmaceutical services and the level of satisfaction of outpatients (p < 0.05), with overall patient satisfaction with pharmaceutical services of 76.99%. Another study, by Ismana (2015), concluded that there is a significant relationship between assurance and patient satisfaction (p = 0.000) at Arjawinangun Hospital, Cirebon Regency.
BPJS outpatients also complain about the overall availability of supporting facilities at the pharmacy installation, especially the availability of drugs. This is related to the time patients wait to receive the drugs they need at that moment. According to Purwandari (2017), the longest delay occurs during drug delivery because prescriptions are labeled out of queue-number order and because of the shortage of employees, especially during peak hours, so that the medicine is only handed over once the officer has finished work at another stage. Delays also occur because officers wait for the medicine baskets to pile up after labeling before handing them over to the drug delivery desk.
Another thing that gives patients a poor impression of assurance is that the pharmacy staff do not ask for the patient's address and telephone number but focus only on the patient's name and drug use, so that if an error occurs in dispensing there is no way to contact the recipient of the wrong medicine. If this happens, it can cause anxiety during the treatment of the patient's illness. In line with the findings in the field, the coefficient calculation of the probability of satisfaction showed that 13.77% of patients were not satisfied with the services at the pharmacy installation.
According to the author, each patient has a different perception when assessing the assurance of pharmaceutical services, and this can affect satisfaction with outpatient treatment at the hospital. In this study, although patients tended to be satisfied with the assurance aspects of pharmaceutical services, if one or two points that a patient considers important are lacking, that patient may perceive the service guarantee as unsatisfactory. In other words, aspects that are important to one patient and affect treatment satisfaction are not necessarily important to another.
The Effect of Direct Evidence on BPJS Outpatient Satisfaction at the General Hospital of dr. GL Tobing Tanjung Morawa
Statistically, the higher the direct (tangible) evidence aspect of pharmacy services, the better the patient satisfaction in the hospital. This can be seen from the p value of 0.006, which is smaller than 0.05, meaning Ho is rejected and Ha is accepted: physical evidence has an effect on BPJS outpatient satisfaction. The OR value of 3.838 means that a pharmacy installation service with good tangible evidence has a 3.838 times greater chance of producing patient satisfaction than one whose tangible evidence is considered not good.
The results of this study are in line with the research of Mayefis et al. (2015), which states that the tangible dimension has a significant influence on patient satisfaction at Apotek X, Padang City. Another study, by Rensiner (2018), explains that there is a significant relationship between reliability, responsiveness, confidence, empathy and physical evidence and patient satisfaction at the outpatient polyclinic of RSUD Dr. Achmad Darwis.
The proportion of direct evidence perceived by patients in pharmaceutical services is generally good. However, some patients feel that the direct evidence is not good, especially regarding the availability of magazines and newspapers while waiting for health services and for drugs at the pharmacy installation. According to Devani (2018), the average patient waits 13.03 minutes to be called to the registration counter according to the queue number and waits a maximum of 15 minutes to be examined by a doctor, with a maximum total of 86.68 minutes.
The availability of facilities during the waiting time makes patients hope that they can spend the time reading books or magazines to ward off boredom and fatigue before receiving health services and their drugs. According to Torry (2016), based on the 2014 performance report of RSUD Dr. Iskak Tulungagung, the average waiting time for outpatient services is 70 minutes, which exceeds the national minimum service standard (SPM) of 60 minutes; incomplete supporting facilities are among the inhibiting factors.
According to the author, waiting is tedious for anyone, let alone for people who are experiencing health problems. We recommend that the hospital provide online entertainment in the form of wireless fidelity (Wi-Fi) access to reduce boredom while patients wait for pharmaceutical services, since online entertainment applications offer a variety of content that patients can view.
The Effect of Empathy on BPJS Outpatient Satisfaction at dr. GL Tobing Tanjung Morawa
Statistically, the higher the empathy aspect of pharmacy services, the better the patient satisfaction in the hospital. This can be seen from the p value of 0.003, which is smaller than 0.05, meaning Ho is rejected and Ha is accepted: empathy has an effect on BPJS outpatient satisfaction. The OR value of 5.409 means that a pharmacy installation service with good empathy has a 5.409 times greater chance of producing patient satisfaction than one with poor empathy.
The results of this study are similar to previous research reporting that the empathy variable has a significant effect on patient satisfaction at the Pharmacy Installation of the Luwuk Regional Hospital, Banggai Regency, with a significance value of 0.000 (p value < 0.05) (19). Another study, by Kurniasih et al. (2015), concluded that satisfaction with empathy played a role in increasing loyalty among In Health patients at Santo Yusup Hospital.
According to Kartikasari (2014), many factors influence the perception of hospital service quality and patient satisfaction, including communication, administrative procedures, infrastructure, quality of personnel, clinical care, hospital image, hospital social responsibility, and patient trust in the hospital.
One of the main prerequisites for a pharmacist is an empathetic attitude, namely the ability to listen to and understand the patient first before expecting to be heard or understood. A pharmacist who is willing to understand and listen to patients first can build the openness and trust needed for cooperation and synergy with patients during the treatment process. According to Aribowo and Hartono, in Evert (2020), the five inevitable laws of effective communication are summarized in the word REACH: Respect, Empathy (the ability to listen and understand first), Audible, Clarity, and Humble. This attitude can be trained through various forms of communication training.
In the author's view, the attitude and communication style of pharmacists toward patients are closely related to satisfaction with pharmacy services. If staff are unfriendly and inattentive, this shapes the perceptions of patients seeking treatment at the hospital. Patients who are experiencing health problems need extra attention, particularly regarding their complaints, both in the form of information and in the availability of the drugs they must take so that they recover quickly. These two aspects are therefore very important in improving the quality of pharmaceutical services. In the future, hospital management needs to improve the attitude and communication of pharmacy staff through regular communication training so that they can empathize with patients' suffering and provide pharmaceutical services seriously.
Conclusion
The conclusion of this study is that reliability, responsiveness, assurance, direct evidence and empathy each have an effect on BPJS outpatient satisfaction at the General Hospital of dr. GL Tobing Tanjung Morawa, with empathy as the dominant factor. It is recommended that hospital management plan the proposed addition of human resources, especially pharmacists, to support pharmaceutical services in this type C hospital and evaluate the performance of pharmacists.
KRAS G12C Mutations in NSCLC: From Target to Resistance
Simple Summary
A better understanding of the role of KRAS and its different mutations has led to the development of specific small-molecule inhibitors able to target KRAS G12C, an oncogenic driver mutation in a number of cancers, including non-small cell lung cancer. While these therapies hold great promise, they face the same limitation as other kinase inhibitors: the emergence of resistance mechanisms. The biology behind KRAS G12C inhibitor resistance has been investigated with genome-wide approaches, in the hope of finding a way to improve the efficacy of these new molecules. Here, we review the biology of KRAS G12C, mechanisms of drug resistance and potential approaches to overcome the latter.
Abstract
Lung cancer represents the most common form of cancer, accounting for 1.8 million deaths globally in 2020. Over the last decade, the treatment of advanced and metastatic non-small cell lung cancer has dramatically improved, largely thanks to the emergence of two therapeutic breakthroughs: the discovery of immune checkpoint inhibitors and the targeting of oncogenic driver alterations. While these therapies hold great promise, they face the same limitation as other inhibitors: the emergence of resistance mechanisms. One such alteration in non-small cell lung cancer is the Kirsten Rat Sarcoma (KRAS) oncogene. KRAS mutations are the most common oncogenic driver in NSCLC, representing roughly 20–25% of cases. The mutation is almost exclusively detected in adenocarcinoma and is found among smokers 90% of the time. Along with the development of new drugs that have been showing promising activity, resistance mechanisms have begun to be clarified. The aim of this review is to unwrap the biology of KRAS in NSCLC with a specific focus on primary and secondary resistance mechanisms and their possible clinical implications.
Introduction
Lung cancer is the most common form of cancer (in 2018, 11.6% of all new cancer cases were lung cancer cases) [1][2][3], accounting for 1.8 million deaths globally in 2020. In Europe, lung cancer was the leading cause of cancer-related deaths in 2018, representing 18.6% of these deaths. The five-year relative survival rate for lung cancer is lower than that of many other leading cancer types [4].
Non-small cell lung cancer (NSCLC) can be stratified into two main histotypes. The most common is lung adenocarcinoma (60%), while the second is squamous cell carcinoma (35%). These have inherent differences in terms of causes, clinical presentation and genomic profiles [5]. Over the last decade, the treatment and prognosis of patients with advanced and metastatic NSCLC have dramatically improved, largely thanks to the emergence of two therapeutic breakthroughs: the discovery of immune checkpoint inhibitors [6] and the targeting of oncogenic driver alterations with small-molecule kinase inhibitors (KIs).
KRAS as a Therapeutic Target
Targeted Therapy
The first data showing clear activity with drugs targeting KRAS G12C have recently been released. Two molecules have undergone in-human clinical trials and have reached phase II or III trials. These compounds rely on the mutant cysteine for binding, disrupting Switch-I/II and converting the KRAS preference from GTP to GDP, thus holding KRAS in the inactive GDP-bound state and further inhibiting RAF binding and consequent downstream signalling [37]. There are other drugs in development using slightly different mechanisms to inhibit cancer growth. Selective quinazoline-based compounds and guanosine mimetic inhibitors both suppress GTP loading of KRAS G12C, thereby hindering the signal for cell proliferation [35,38]. Allele-specific inhibition, trapping G12C in its inactive state, is another approach being developed [39,40]. Given their mechanism of action, the current direct, selective KRAS G12C KIs are not expected to result in substantial adverse events, and the initial safety data are quite reassuring [41].
Sotorasib is an irreversible KRAS G12C inhibitor which locks KRAS in the GDP-bound, inactive, state. The drug has a half-life of 6 h. Preliminary results from the phase I and II sotorasib trials showed promise in terms of RR and DOR [42]. The dose escalation found 960 mg BID to be active and safe, leading to the phase II. Among NSCLC patients, the RR was 37.2%, PFS 6.3 months and DOR 10 months [43]. A phase III trial comparing docetaxel to sotorasib in patients with KRAS G12C mutation is ongoing in the second line setting. Similarly, adagrasib, another small molecule, has been tested in the phase I-II Krystal-1. Among the 51 patients in the NSCLC cohort, there was a 45% RR. Further trials are ongoing [44]. Other KRAS G12C inhibitors are under evaluation in different clinical trials (summarized in Table 1).
Mechanisms Underlying Resistance to K-Ras G12C Inhibitors
Despite the demonstrated activity of the first two KRAS G12C inhibitors, adagrasib and sotorasib, it is, unfortunately, equally clear that the vast majority of patients do not respond to them. Resistance to anticancer drugs can be either intrinsic or acquired. Given the lack of benefit in about 50-60% of patients, it is likely that certain subgroups of patients are intrinsically resistant to KRAS G12C inhibitors. Causal mechanisms have not been identified in vivo and only preclinical data are available. Low dependency on KRAS signalling could confer intrinsic resistance to these inhibitors [45]. KRAS dependency varies across cell models harbouring mutant KRAS, meaning that some KRAS mutant cancers might not be driven by KRAS signalling [46]. In general, tumour cell growth is mediated by the canonical MAPK/ERK and PI3K/AKT/mTORC1 signalling pathways [47]. PI3K activation is not controlled exclusively by RAS, even if the RAS protein can play an important role and interact with the PI3K p110 subunit for AKT activation [48,49]. KRAS G12C inhibitors may act primarily through targeting MAPK/ERK, without affecting the phosphorylation status of AKT and the mTORC1-effector pathway [39]. Therefore, parallel cell growth signalling redundancy may bypass the need for KRAS-dependent activation in cell proliferation. This could explain some inherent resistance to KRAS KIs (Figure 2a,b) [50]. Additionally, intrinsic resistance may be caused by concurrent genetic alterations that are not targeted by KRAS G12C inhibitors [51]. In KRAS G12C in vitro models, secondary KRAS mutations confer intrinsic resistance to targeted therapy by either potentiating nucleotide exchange (secondary mutations: Y40A, N116H, or A146V) or impairing inherent GTPase activity (secondary mutations: A59G, Q61L, or Y64A) (Figure 2c) [39]. As already seen with other oncogenic mutations, the mutational status of the KRAS gene can be heterogeneous in the same patient, leading to mixed responses to KRAS G12C inhibition [52].
Every KI in the metastatic setting invariably induces resistance mechanisms. For instance, prolonged treatment with either RAF or MEK inhibitors, as used in melanoma, results in rebound ERK activation due to the amplification of upstream drivers, such as RTKs and RAS [53]. Furthermore, Xue et al. have described the mechanism of the rapid adaptation of cancer cells to KRAS G12C inhibitors in cell lines, showing that subpopulations of KRAS G12C mutant cells respond heterogeneously to KRAS G12C inhibition [54]. The adaptation of cells to KRAS G12C inhibition appears independent of the activity of wild-type RAS isoforms and strongly dependent on new KRAS G12C production.
Due to this process, newly synthesized KRAS G12C was maintained in its GTP-bound state to promote cancer cell proliferation. Similarly, Ryan et al. described an acquired resistance pathway with a rapid reactivation of downstream effectors after treatment with specific KRAS G12C inhibitors [55]. Increased GTP-bound wild-type RAS (N-RAS and H-RAS) proteins were responsible for restoring MAPK activation after drug treatment, even though KRAS was maintained in its inactive state. Such an intriguing difference is difficult to explain; nevertheless, the two studies highlight that the restoration of overall RAS activity was due to increased RTK-SHP2 activation (Figure 3). To further improve the efficacy of KRAS G12C inhibition, this acquired resistance pathway should be overcome. Some of the intrinsic resistance to KRAS targeting agents identified in clinical practice as well as in preclinical models could be explained by the lack of dependency of some KRAS mutant tumours on KRAS signalling. This could stem from the differing ways in which RAS proteins activate downstream signalling. These pathways include the MAPK/ERK as well as the PI3K/AKT/mTOR pathways. The latter's activation does not depend solely on RAS signalling [56]. In KRAS mutant pancreatic ductal adenocarcinoma and lung adenocarcinoma, for example, it has been demonstrated in cell lines that the dependence on RAS signalling varies tremendously [46]. As such, even with effective complete KRAS inhibition, some KRAS-mutant pancreatic ductal adenocarcinoma cells survive and thrive. Upon further analysis, the majority of these cells have MAPK signalling which is PI3K-dependent, which should confer therapeutic sensitivity to inhibitors of the MAPK pathway [50]. Another mechanism allowing the bypassing of KRAS inhibition in preclinical cancer cell lines is the amplification of a transcriptional coactivator, YAP1 [57].
In addition to the diverse mechanisms of intrinsic resistance to KRAS targeting drugs, acquired resistance frequently emerges. In the era before direct KRAS inhibitors, this phenomenon was often associated with resistance to therapy targeting various steps of the downstream signalling pathway [11]. MEK is among the most common targets in the MAPK pathway, but MEK inhibitors have been largely disappointing in this context, proving to be of very limited activity in patients with lung adenocarcinoma harbouring KRAS mutations [58]. Given the early promise of finally targeting KRAS indirectly through this approach, a randomised controlled trial, the SELECT-1 trial, compared docetaxel alone to its combination with selumetinib, a MEK inhibitor, in patients with KRAS-mutant lung adenocarcinoma. In spite of the large study population of 510 patients, there was no significant difference in either progression-free survival or overall survival [59]. In a similar phase 2 trial involving trametinib, another MEK inhibitor, there was no survival benefit of the combination with docetaxel over docetaxel alone among previously treated patients with KRAS-mutant NSCLC [60].
On a biological level, the resistance to downstream blockade is likely due to a bypassing of the blocked pathway through the activation of alternative, parallel, RAS-dependent pathways. Furthermore, it has been observed that MEK inhibition downregulates normal negative feedback mechanisms, inducing an upregulation of receptor tyrosine kinases (RTKs) upstream [61]. This mirrors a resistance mechanism observed when targeting BRAF V600E mutated cancers with direct BRAF inhibitors. Here, downregulating the negative feedback mechanisms causes EGFR-driven activation of parallel pathways including CRAF and RAS. This phenomenon has been studied intensely in the context of the aggressive BRAF V600E mutant subset of colorectal cancers. Upon targeting BRAF in these diseases, there is a higher level of EGFR than in melanoma cells [62]. To compensate for this EGFR-mediated resistance, a recent approach has been to combine BRAF, MEK and EGFR inhibitors simultaneously in colorectal cancer, inducing a higher response and overall survival than standard therapy [63]. There is still significant room for improvement in this domain, but it shows that a better understanding of the mechanisms of resistance to therapy could allow physicians to ultimately tailor the treatment to extend the benefit for patients.
The loss of wild-type KRAS also plays a role in the sensitivity to MEK targeting agents in KRAS-mutant cell lines. Wild-type KRAS appears to promote resistance to MEK inhibition, possibly due to its tendency to form dimers with mutant KRAS [64]. Dimerisation is a necessary step in the activation of KRAS, including oncogenic signalling. Therefore, the loss of wild-type KRAS and subsequent decrease in dimerisation between it and the mutated variants may play a role in preventing the carcinogenic potential and overall function of mutant KRAS [64].
While KRAS G12C inhibition is recent, the emergence of acquired resistance has already been documented. In preclinical models, KRAS G12C mutant lung adenocarcinoma cell lines treated with the ARS-1620 inhibitor displayed varying degrees of MAPK pathway reactivation. As we discussed previously with regards to inhibitors of downstream Ras signalling, targeting the KRAS RTK itself was effective in some cell lines, while targeting the PI3K pathway directly provided greater inhibition in others [65]. The heterogeneous efficacy of KRAS G12C inhibition by ARS-1620 is explained by the identification of distinct subgroups within the cell lines, each with their own response upon exposure to the drug. Most cell lines became quiescent, entering the G0 state when treated with ARS-1620. However, some rapidly regained their RAS signalling activity and restarted proliferating. This RAS signalling reactivation appears to be the result of novel KRAS G12C production, stemming from the reduction in MAPK signalling. The newly formed KRAS G12C remains in its activated, GTP bound form thanks to the influence of EGFR and SHP2 signalling. In this state, it is insensitive to KRAS G12C blocking drugs. In order for KRAS G12C to escape from the quiescent, drug-induced G0 state, Aurora kinase A (AURKA) and the downstream CRAF also play a role by stabilising active KRAS [54].
While the production of new active KRAS G12C appears to be implicated in resistance to targeted therapy and persistent cell proliferation, another possibility is an adaptive wild-type RAS response to G12C inhibition. Under pressure from KRAS G12C inhibitors, a feedback loop can stimulate RTKs, leading to the activation of HRAS and NRAS, and ultimately, signalling independent of KRAS G12C. On a biological level, it appears as though no single RTK activity was required for signalling in every KRAS G12C model. However, the co-inhibition of the SHP2 phosphatase had a broad efficacy in all models and led to the inhibition of the above-mentioned feedback reactivation mechanism [66]. SHP2 plays a role in mediating proliferative signalling between a number of RTKs and the RAS pathway. Therefore, it has become an attractive target for combination therapies with KRAS G12C inhibitors to attempt to increase both primary efficacy and duration of response. This combination is underway in a phase I/II clinical trial with adagrasib (ClinicalTrials.gov Identifier NCT04330664), and other early phase trials are also exploring SHP2 inhibitors. Both RTKs and SHP2 appear to play a significant role in acquired resistance to KRAS G12C inhibition, and the best approach to targeting them is not yet clear [67].
Resistance to KRAS G12C Inhibitors in Patients
The precise mechanism of resistance in patients with cancer treated with a KRAS G12C inhibitor is unclear at the moment. It is likely that intrinsic and acquired resistance may co-exist and be intertwined in the same patient treated with KRAS G12C-targeted therapies [45]. A number of co-occurring mutations may contribute to adaptive resistance across KRAS mutant cancers [51]. Molecular alterations, such as TP53 (tumor protein p53), CDKN2A (cyclin-dependent kinase inhibitor 2A), STK11 (serine/threonine kinase 11) and KEAP1 (Kelch-like ECH-associated protein 1), might play an important role in explaining the heterogeneity of response; however, limited data are available on their role in predicting adaptive resistance [55]. The presence of co-occurring mutations, namely KEAP1 or STK11, could affect the efficacy of the KRAS G12C inhibitor sotorasib. Strong conclusions cannot be drawn due to the small number of patients analysed in the CodeBreaK 100 trial [43].
Recently, Tanaka et al. [68] described a patient with KRAS G12C-mutant NSCLC who developed polyclonal acquired resistance to adagrasib, with the emergence of 10 heterogeneous resistance alterations in serial cell-free DNA. The alterations spanned four genes (KRAS, NRAS, BRAF, MAP2K1), all of which converge to reactivate RAS-MAPK signalling. They identified a de-novo KRAS Y96D mutation affecting the switch-II pocket, able to interfere with key protein-drug interactions and confer resistance to KRAS G12C KIs in engineered and patient-derived KRAS G12C cancer models. Interestingly, a novel, functionally distinct tri-complex KRAS G12C active-state inhibitor, RM-018, retained the ability to bind and inhibit KRAS G12C/Y96D and could overcome resistance.
An important contribution to the understanding of resistance mechanisms was provided by Koga et al. [69], who generated 142 Ba/F3 clones resistant to either sotorasib or adagrasib. Of these, 124 (87%) harboured secondary KRAS mutations, comprising 12 distinct KRAS mutations including Y96D/S, which were resistant to both inhibitors. The combination of a novel SOS1 inhibitor, BI-3406, and trametinib showed promising activity against this specific acquired resistance. Furthermore, the G13D, R68M and A59S/T mutations were highly resistant to sotorasib but remained sensitive to adagrasib. Conversely, KRAS Q99L was resistant to adagrasib but sensitive to sotorasib.
Genomic analyses including co-existing alterations should be documented in future therapies and trials to optimize treatment choices. Several studies are ongoing combining KRAS G12C inhibitors with other compounds (Table 1) to overcome resistance and improve clinical benefit.
How to Overcome Vertical Signal Resistance Pathways
Targeting KRAS induces a reduction in the ERK-mediated negative feedback loop, thus upregulating RTK expression. This upregulation then participates in reactivating RAS signalling, promoting cell proliferation through SHP2-mediated RAS activation in spite of upstream drug-induced KRAS inhibition [70]. The dependency of RAS reactivation on SHP2 suggests that combining a KRAS and an SHP2 inhibitor could lead to greater anti-proliferative efficacy [54]. This has been tested in preclinical models: using afatinib or erlotinib to prime cells potentiated the subsequent efficacy of the KRAS G12C inhibitor ARS-853 [39,40]. Concurrent inhibition of the fibroblast growth factor receptor (FGFR) or of the tyrosine-protein kinase Met (c-MET) was found to be more effective in inhibiting cell growth in vitro than targeting KRAS G12C alone [71].
The potential synergy of targeting multiple RTKs to improve RAS inhibition is inconsistent across preclinical models. This also highlights the challenge of knowing which RTK plays a significant role in RAS reactivation under therapeutic pressure [55]. Furthermore, the RTK-associated phosphatase SHP2 might represent a possible targetable RTK signalling node and has been explored [55]. RTKs activate RAS in non-tumoral cells by recruiting the SHC-GRB2-SOS1 complex independently of SHP2 [72]. In cancer cells harbouring KRAS G12C mutations, SHP2 inhibition triggers a senescence response in vivo and in growth factor-limited conditions [73]. In melanoma, targeting RAS downstream effectors with MEK inhibitors on their own triggers the relief of ERK-mediated feedback inhibition of RTK signalling. This causes RAS reactivation which is SHP2-dependent [74]. The mechanism through which SHP2 activates RAS is still unclear. Combining KRAS G12C and SHP2 inhibitors has demonstrated improved efficacy in preclinical and animal models [54]. Similarly, in another preclinical model, the use of the KRAS KI adagrasib combined with RMC-4550, a direct SHP2 inhibitor, improved the inhibition of RAS signalling and anti-proliferative efficacy compared to adagrasib alone. Perhaps what is most interesting about this approach is that the increased efficacy was detected in models sensitive to adagrasib, while the combination therapy also rendered refractory models sensitive [75]. Of course, as with all therapeutic combinations, toxicity could be a limiting factor for use in patients.
Another possible target is SOS1, the guanine nucleotide exchange factor that activates KRAS [76]. Several SOS1 inhibitors have been investigated, and BAY-293 showed high efficacy once combined with the KRAS G12C inhibitor ARS-853, inhibiting RAS activation and cell proliferation by disrupting RAS-SOS1 interactions [77].
A step further to overcome resistance and increase treatment efficacy may involve upstream targeting. While it would not induce specific inhibition of individual steps of the downstream pathway, it would limit the activation of parallel signalling pathways, likely resulting in fewer side effects and a more tolerable treatment [78]. Blocking any of a number of signalling pathways, including PI3K, EGFR and FGFR, appears to provide a synergistic anti-proliferative effect with KRAS G12C inhibition in vitro [39,65]. The biological rationale behind the potential efficacy of a combination of a KRAS G12C inhibitor and a PI3K inhibitor is that they could decrease phosphatidylinositol (3,4,5)-trisphosphate (PIP3)-bound GAB adaptor proteins. The latter are involved in ERK reactivation; thus, by decreasing both the ERK and PI3K signalling cascades, cell growth inhibition could be potentiated [65]. Lee et al. published an experiment pertaining to a potential new therapeutic combination involving a KRAS G12C inhibitor, ARS-1620, and alisertib, an AURKA inhibitor. The latter has been shown to overcome resistance to KRAS inhibition in some ARS-1620-refractory tumours [54]. AURKA is a mitotic serine/threonine kinase involved in the regulation of the cell cycle; its inhibition blocks the interactions between CRAF and KRAS, suppressing ERK signalling and inhibiting cell growth [54].
Combining Current Treatments in KRAS G12C Mutant Cancers
Chemotherapy remains an integral treatment for patients with cancer, in particular lung cancer, and it is routinely associated with immune checkpoint inhibitors in the first line [6]. It is therefore of interest to explore whether the combined use of standard-of-care chemotherapy or immunotherapy with a KRAS G12C inhibitor could be synergistic. In advanced KRAS G12C mutant NSCLC, sotorasib and adagrasib have been assessed when administered concomitantly with carboplatin or palbociclib [75,79]. The combination of carboplatin and sotorasib resulted in significant tumour regression in xenograft mouse models [75,79]. Similarly, adagrasib combined with palbociclib, a CDK4/6 inhibitor approved in breast cancer, showed anti-proliferative effects in adagrasib-resistant models. The authors hypothesize that curbing retinoblastoma protein (Rb)/E2F transcription factor (E2F) signalling [75] explains the efficacy of this combination.
In vitro data suggest that KRAS mutant cancers are immunosuppressive, as oncogenic KRAS signalling can induce the expression of immunomodulatory factors [80]. It is likely that KRAS inhibition could convert the immunosuppressive tumour microenvironment into one that favours antitumor immune responses. Treatment with high dose sotorasib showed durable tumour regression in immunocompetent mice. On the contrary, in immunocompromised mice, tumours rapidly progressed after a short response [79]. In KRAS G12C models, similar pre-clinical data were seen with adagrasib, which enhances antigen presentation, and stimulates the tumour immune microenvironment [81].
Conclusions
Cancers harbouring KRAS mutations comprise a very heterogeneous group. Both the biology of KRAS-driven diseases and their sensitivity to small molecule KIs are influenced by the type of KRAS variant and the presence of co-mutations. Recently, sotorasib and adagrasib have shown clinical activity in NSCLC. Their efficacy is unprecedented for KRAS G12C targeting agents; however, enthusiasm must be tempered by resistance mechanisms. Both upstream and downstream strategies have been explored to overcome this resistance and enhance the efficacy of KRAS G12C inhibitors. Off-target therapy such as chemotherapy might prove to be a fruitful combination with these KIs. Similarly, combining KRAS inhibitors with immune checkpoint inhibitors could improve their efficacy by modulating the tumour microenvironment and increasing the sensitivity to checkpoint inhibitors. Potential co-mutations known to affect the immune microenvironment, such as STK11/KEAP1, are known to reduce the benefit derived from immune checkpoint inhibitors in KRAS-mutant NSCLC. Their role, and that of other concurrent mutations including TP53, remains unclear, and further studies are needed to clarify their prognostic and/or predictive role in KRAS inhibition.
Finally, while the therapeutic landscape has already dramatically changed, combination trials are ongoing. Understanding the biology behind resistance mechanisms is the best way forward to optimize therapies offered to patients.
Author Contributions: All the authors contributed equally to the manuscript. All authors have read and agreed to the published version of the manuscript.
Funding: This research received no external funding.
Conflicts of Interest:
A.A. has received personal fees from Bristol-Myers Squibb, AstraZeneca, Roche, Pfizer, Merck Sharp and Dohme, and Boehringer-Ingelheim for work performed outside of the current study. G.L.B. has received personal fees from Boehringer, Janssen-Cilag, and Roche for work performed outside of the current study. A.F. has received personal fees from Bristol-Myers Squibb, Roche, Pfizer, Merck Sharp and Dohme, and Astellas for work performed outside of the current study.
Weight-Perception-Based Novel Control of a Power-Assist Robot for the Cooperative Lifting of Light-Weight Objects
We developed a 1-DOF power assist robot system for lifting objects by two humans cooperatively. We hypothesized that weight perception due to inertia might be different from that due to gravity when lifting an object with power-assist because the perceived weight differs from the actual weight. The system was simulated and two humans cooperatively lifted objects with it. We analyzed human features such as weight perception, load forces, motions etc. We found that the robot reduced the perceived weights to 25% of the actual weights, and the load forces were 8 times larger than the actual requirements. The excessive load forces resulted in excessive accelerations that jeopardized the performances. We then implemented a novel control based on the human features, which was such that a virtual mass exponentially declined from a large value to a small one when subjects lifted objects with the robot and the command velocity exceeded a threshold. The novel control reduced excessive load forces and accelerations and thus enhanced performances in terms of maneuverability, safety etc. The findings may be used to develop power assist robots for manipulating heavy objects in industries that may augment human's abilities and skills and may improve interactions between robots and users.
Object Manipulation in Industries
Manipulation of heavy objects is common and necessary in industries and households such as agriculture, forestry, mining, construction, manufacturing and assembly, transport and logistics, military, disaster and rescue operations, meat processing etc. Manual manipulation of heavy objects is very tedious, causes health problems (e.g., back pain, injuries) to humans and restricts work efficiency [1]. On the contrary, autonomous systems usually do not provide required flexibility in object manipulation [2]. Hence, we argue that suitable power assist robots may be conveniently and efficiently used for handling heavy objects in industries because a power assist robot reduces perceived heaviness of lifted objects through its assistance and also augments human's abilities and skills in object manipulation [3]. However, such robots are not available in practices in industries.
Present Applications of Power Assist Robots
Power assist robots assist humans in performing tasks by augmenting human abilities and skills. This type of robot was first conceived in the early 1960s with the invention of the "Man-amplifier" and "Hardiman". However, the progress of research in this field has not been satisfactory [3]-[5]. Power assist robots are now confined to a few applications such as healthcare, rehabilitation etc. [6]-[10]. A few power assist systems are available for other applications, e.g., assisted slide doors for automobiles [11], assistance for lifting baby carriages [12], assistance for workers in agricultural jobs [13], hydraulic assistance for automobiles [14], power assist control for bicycles [15], skill-assist in manufacturing [16], assistance for sports training [17], assistance for carpentry workers [18], assistance for horse training [19] and so forth. However, suitable power assist systems for handling heavy objects in industries are not seen.
Robot-Assisted Object Manipulation
A few power assist devices are available for handling objects [20]-[25]. However, most of them have not been designed for the manipulation of heavy industrial objects, and they also have limitations. For example, human features are not included in the control, the system is itself heavy, the amount of power assistance is unclear, or the system is not properly evaluated for safety, maneuverability, efficiency etc. [22]. Again, some systems suffer the disadvantages of pneumatics, hydraulics etc. [22], or generate excessive power [23]. The operator's intention is not reflected in the control, and the system generates vibrations [24]. Human force is not measured directly and separately, the system restricts movement due to constraints, there are difficulties in path planning, object handling speed is slow etc. [25]. We think that these systems are not suitable for lifting heavy objects in industries because they are not sufficiently safe, natural, stable, easy and human-friendly. A few power assist systems are commercially available, such as HAL [41] and PLL [42], but they are not suitable for manipulating heavy and large industrial objects due to their configuration, self-weight etc.
Moreover, there are several common issues with power assist systems e.g., actuator saturation, noises, disturbances, user adjustment, selection of appropriate control methods, accuracy, capacity, number, configuration, and sensitivity of force sensors, number of degrees of freedom, stability etc. that should be addressed.
However, the conventional assist systems do not adopt any holistic approach to address these issues.
Other types of robot systems may be available for object manipulation, but they are not power-assist systems, and hence power assistance cannot be obtained from them [26]. These robots are also not designed for industrial object manipulation.
Requirements for Power-Assisted Object Manipulation
We think that the requirements for power assist systems for manipulating heavy objects in industries are that the systems should ensure (i) optimum perceived heaviness, (ii) optimum manipulative forces, (iii) optimum motions, maneuverability, stability, safety, naturalness, ease of use, comfort (absence of fatigue), situational awareness of user, efficiency, manipulating speed etc., (iv) flexibility to adjust with objects of different sizes, shapes, mass etc., (v) necessary DOFs in object manipulation such as vertical, horizontal, rotational, (vi) adjustment with worst-cases, uncertainty, change of situations, disturbances etc., (vii) fulfillment of operator's biomechanical needs etc [3], [38]. However, we do not see any initiatives that fulfill these requirements entirely.
Weight Illusion in Power-Assisted Object Manipulation
A power assist robot reduces the perceived weight of an object lifted with it [3]-[5]. Hence, the manipulative forces required to lift the object with power-assist should be lower than those required to lift the object manually [27]. However, the human cannot correctly perceive the weight of the object before lifting it with the robot and eventually applies an excessive load force (vertical lifting force). The excessive load force results in a sudden increase in acceleration, fearfulness of the human, lack of stability and maneuverability, injuries, fatal accidents etc. However, the existing power assist systems do not consider the weight perception issue [20]-[25].
Cooperative Manipulation of Objects
In industries, workers use one hand (unimanual) or two hands (bimanual) to handle objects, and sometimes two or more workers handle objects cooperatively. Workers decide the grasping and manipulation method on the basis of the object's size, mass, shape etc. as well as the task requirements [28]-[30]. We assume that weight perception, load forces and motions for one method may be different from those for the others, and the differences may affect the control performances. We also assume that, of the three manipulation methods (unimanual, bimanual, cooperative), cooperative manipulation may be the most beneficial because it may provide advantages over the others in terms of perceived weights and load forces (cooperative manipulation produces the least perceived heaviness and load force) [28]. Again, cooperative manipulation may be the most suitable when manipulating objects of large size and intricate shape.
A few works addressed manipulation of a single object with a robot by a single human [20]-[25]. Cooperative manipulation of a single object by two robots was also studied [31]. Handling an object with the two hands of a single human was investigated [29]. However, cooperative manipulation of an object with power-assist by two or more humans has not been addressed, though this type of manipulation is very necessary in industries and households.
Objectives of the Paper
Hence, we see that it is necessary to have a model of a power assist system for lifting objects. The control of the system should include weight perception, load forces and motion features to make it human-friendly. Again, as cooperative lifting is expected to be the most beneficial, the model should be based on lifting an object by two humans cooperatively. However, such a model has not been proposed yet. We took an initiative to study cooperative lifting of objects with power-assist by two humans [32]. However, that study was neither complete nor exclusive to cooperative manipulation of objects.
Hence, the objective of this paper was to model a power assist system for cooperative lifting of objects by two humans, and to design and implement a weight-perception-based novel control strategy to improve its performance. We developed a 1-DOF power assist system for lifting objects. We included weight perception in the robot dynamics. The system was simulated and two humans cooperatively lifted objects with it. We critically analyzed human features such as weight perception, load forces and object motions. We then implemented a novel control scheme that reduced excessive load forces and accelerations and thus enhanced performance in terms of maneuverability, safety etc. We then proposed to use the findings to develop power assist robots for handling heavy objects in industries that might help fulfill the requirements, augment human abilities and skills, and improve interactions between robots and users. This paper does not claim that no system exists for the manipulation of heavy industrial objects; rather, the objective is to propose a power assist system that (i) specializes in cooperative handling of very large and heavy objects, (ii) overcomes the limitations of the existing systems, (iii) fulfills all the requirements of object manipulation, and (iv) improves human-friendliness, safety etc. through the inclusion of human features in the control design.
Experiment System Design
We developed a 1-DOF (vertical up-down motion) power assist system using a ball screw actuated by an AC servomotor (type: SGML-01BF12, made by Yaskawa, Japan). The servomotor and the ball screw were coaxially fixed on a metal plate and the plate was vertically attached to a wall, as shown in Fig. 1(a). We made three rectangular boxes by bending aluminum sheets (thickness: 0.5 mm). These boxes were lifted with the power assist robot system and were called the power-assisted objects (PAOs). The dimensions (length x width x height) of the boxes were 6 x 5 x 16 cm, 6 x 5 x 12 cm and 6 x 5 x 8.6 cm for the large, medium and small sizes respectively. The top of each box was covered with a cap made of aluminum sheet (thickness: 0.5 mm). The bottom and back were open. The self-weight of each box was about 13 g on average. A force sensor (foil strain gauge type manufactured by NEC Ltd., Japan) was tied to the ball nut of the ball screw. As shown in Fig. 1(b), one object (box) at a time could be tied to the force sensor through an object holder and be lifted by a human.
We also made three 'manually lifted objects' (MLOs), i.e., boxes of different sizes (small, medium, large), as shown in Fig. 2. The MLOs were lifted manually and were not physically connected to the power assist system. The shape, dimensions, material and outlook of an MLO of a particular size were the same as those of the PAO of that size. However, it was possible to change the weight of the MLO by attaching extra mass to its back while keeping its front view unchanged. The MLOs were used as reference weights for estimating the perceived weights of the PAOs, called the power-assisted weights (PAWs). The complete setup of the experimental power assist system is shown in Fig. 3. Figure 4 shows the final arrangement of the experiments for cooperative lifting of objects with the system, in which the PAO tied to the force sensor is placed on a soft surface.
Weight-Perception-Based Dynamics
According to Fig.4, the targeted equation of motion for lifting a PAO is (1).
That is, $f = m\ddot{x}_d + mg$, where $f$ is the resultant load force applied by the two humans, $m$ is the actual mass of the PAO as visually perceived by the humans, $x_d$ is the desired displacement of the PAO, and $g$ is the acceleration of gravity. As an attempt to introduce weight perception into the dynamic modeling, we hypothesized (1) as (2), $f = m_1\ddot{x}_d + m_2 g$, where $m_1 < m$ and $m_2 < m$, and hence $m_1 \neq m$ and $m_2 \neq m$. Both $m_1$ and $m_2$ stand for mass; $m_1$ forms the inertial force and $m_2$ forms the gravitational force. A difference between $m_1$ and $m_2$ is assumed due to the difference between perception and reality regarding the weight of the object lifted with the power assist robot. The human errs when lifting an object with the power assist robot because the human considers the actual and the perceived weights to be equal, whereas the perceived weight is less than the actual weight. The hypothesis means that the human errs because he/she considers the two 'masses' used in the inertial and gravitational forces to be equal to the actual mass of the object (i.e., $m_1 = m_2 = m$). We assume that, in order to realize a difference between the actual weight and the perceived weight, the dynamics should treat the two 'masses' used in the inertial and gravitational forces as different from, and less than, the actual mass, i.e., $m_1 < m$, $m_2 < m$. However, the challenge is then to optimize the values of $m_1$ and $m_2$ so as to produce satisfactory feelings in humans when they lift objects with the robot. We then derived (3)-(5) based on (2).
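Although the derived equations (3)-(5) are not reproduced here, the general direction of the derivation can be sketched from (2): treating the measured resultant load force $f$ as the input, the reference acceleration and a command velocity for the PAO follow by rearrangement and integration,

$$\ddot{x}_d = \frac{f - m_2 g}{m_1}, \qquad \dot{x}_d(t) = \int_0^t \frac{f(\tau) - m_2 g}{m_1}\, d\tau,$$

noting that the exact discrete-time forms used in the actual controller may differ from this sketch.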
Control System Design
We then diagrammed the control based on (3)-(5), as shown in Fig. 5. If the system is simulated using Matlab/Simulink in the velocity control mode of the servomotor, the command velocity to the servomotor is obtained by (6) and fed to the servomotor through a D/A converter. The servodrive generates the control law based on the error displacement $(x_d - x)$, following velocity control with position feedback. The controller is assumed to be common to the two hands of the two subjects because each trial is intended to be in-phase, symmetric and synchronized; the resultant of the load forces of the two hands of the two subjects, including their cross-talk, is treated as a common command. However, it is possible to design separate but interacting controllers for each hand of each subject [28].
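As a rough illustration of how the block diagram of Fig. 5 could be simulated outside Matlab/Simulink, the Python sketch below integrates the hypothesized dynamics of (2) to obtain a command velocity and uses a simple proportional position-feedback term as a stand-in for the servodrive's internal control law. The gain, time step and force profile are arbitrary assumptions, not values taken from the actual system.

```python
import numpy as np

G = 9.81            # acceleration of gravity [m/s^2]
M1, M2 = 0.5, 0.5   # inertial and gravitational model masses [kg] (assumed values)
KP = 20.0           # position-feedback gain standing in for the servodrive (assumed)
DT = 0.001          # integration time step [s]

def simulate(load_force, t_end=1.0):
    """Integrate the hypothesized dynamics to produce desired motion and a command velocity."""
    xd = vd = 0.0   # desired position and velocity of the PAO
    x = 0.0         # actual position, assuming the servo tracks the command ideally
    log = []
    for k in range(int(t_end / DT)):
        t = k * DT
        f = load_force(t)                # resultant human load force [N]
        acc = (f - M2 * G) / M1          # reference acceleration from (2)
        vd += acc * DT
        xd += vd * DT
        v_cmd = vd + KP * (xd - x)       # command velocity with position feedback
        x += v_cmd * DT
        log.append((t, f, xd, x, v_cmd))
    return np.array(log)

# Example: the humans briefly apply 10 N of lifting force, then hold about 5 N.
trajectory = simulate(lambda t: 10.0 if t < 0.2 else 5.0)
print(trajectory[-1])   # final time, force, desired position, actual position, command velocity
```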
We think that the following parameters for the control in
Subjects
Ten male engineering students aged between 22 and 31 years (Mean=23.40 years, S.D. =2.6077) were selected to voluntarily participate in the experiment. The subjects were right-handed, physically and mentally healthy.
Design of the Experiment
The independent variables were m1 and m2, and visual object size. Dependent variables were perceived weights (PAWs), peak load forces (PLFs), and object motions.
Experiment Procedures
The system shown in Fig. 5 was used for the lifting trials. System performances were expressed through several criteria such as motion, object mobility, naturalness, stability, safety, ease of use etc., and in each trial the subjects subjectively evaluated (scored) the system using a 7-point bipolar, equal-interval scale as follows:
1. Best (score: +3)
2. Better (score: +2)
3. Good (score: +1)
4. Alike (score: 0)
5. Bad (score: -1)
6. Worse (score: -2)
7. Worst (score: -3)
All subjects conducted this experiment for the small, medium and large objects separately. We recorded load force, motion (displacement, acceleration), PAW and evaluation data for each trial separately. Figure 4 shows the experimental procedures. The subjects did not suffer from fatigue as there was sufficient rest and refreshment between trials.
Psychophysical Relationship between Actual and Power-Assisted Weights (PAWs)
We calculated the mean PAW for each $m_1$ and $m_2$ set for the small, medium and large objects separately. Then we drew a graph for each object size, taking the simulated gravitational mass ($m_2$) of the twelve $m_1$ and $m_2$ sets as the abscissa and the mean PAWs for the twelve sets as the ordinate. Here, the $m_2$ value was taken as the actual weight of the PAO. The relationship between the actual weights and the PAWs for the large object is shown in Fig. 6. The relationships for the medium and small objects were almost the same as that for the large object. We see in the figure that the PAW is 0.125 kg for all $m_1$ values when the actual weight is 0.5 kg. Again, the PAW is 0.25 kg for all $m_1$ values when the actual weight is 1.0 kg, and so on. We thus estimated that the PAW was 25% of the actual weight. The figure shows that humans do not feel the change in $m_1$, i.e., $m_1$ does not affect the PAWs. Analyses of variance (ANOVAs) (visual object size, subject) on the PAWs for each $m_1$ and $m_2$ set showed that variations due to object size were not significant ($F_{2,18} < 1$ for each $m_1$ and $m_2$ set). The reason may be that subjects estimated the PAWs using haptic cues, with the visual cues of the objects having no influence.
Variations among subjects were also found statistically insignificant (F9,18<1 for each m1 and m2 set) [27]. Figure 7 shows the time trajectories of object's displacement and acceleration, and load force for a typical trial. We then derived the velocity for each trial based on the displacement time trajectory of Fig.7 following (7) and determined their means for each object size separately.
Velocity = (MPD - MID) / (TPD - TID) (7)
In (7), MPD stands for magnitude of peak displacement, MID stands for magnitude of initial displacement, TPD stands for time corresponding to peak displacement and TID stands for time corresponding to initial displacement. We also derived the magnitude of peak acceleration for each trial based on the acceleration time trajectory of Fig.7 and determined their means for each object size separately. Mean velocity and mean peak acceleration for different sizes of objects are shown in Table 2. Results show that velocity and peak acceleration are proportional to object sizes [27]. Results also show that the accelerations are very large.
Analyses on Load Forces and Determination of Excess in Load Forces
Based on the time trajectory of load force in Fig.7, we derived the magnitude of peak load force (PLF) for each trial and determined the mean PLFs for each m1 and m2 set for each object size separately, as shown in Table 3. We then plotted a graph taking the m1 values of the twelve m1 and m2 sets as abscissa and the mean PLFs for the twelve m1 and m2 sets for the three objects as ordinate, and thus determined the relationships between m1 and PLFs as shown in Fig.8. We see in Table 3 that the lowest load forces were applied for the smallest values of m1 and m2, i.e., for m1=0.5, m2=0.5. We assumed that m1=0.5 kg, m2=0.5 kg might be the best amongst all twelve sets of m1 and m2 [33]. On the other hand, the actually required PLF to lift a PAO should be slightly larger than the PAW at m1=0.5, m2=0.5 [27], which is 0.125 kg or 1.22625 N (Fig.6). We compared the PAWs (Fig.6) to the PLFs (Table 3) for the large, medium and small objects for m1=0.5, m2=0.5 and determined the excess in PLFs following (8), as shown in Fig.9. The results show that, on average, the operators applied PLFs 8.003 times larger than the actually required PLFs. We also see that the magnitudes of the PLFs as well as the excess in PLFs are proportional to object sizes [27].
Excess in PLF = PLF - PAW (8)
We determined the mean evaluation scores for each size PAO for m1=0.5, m2=0.5. The detailed results will be presented later. The results showed that the system performances were not very satisfactory. We assume that the excessive PLFs produced excessive accelerations that in turn resulted in less satisfactory performances.
7. Experiment 2: Novel Control to Improve the System Performances
Experiment 2 was conducted to reduce the excessive load forces by applying a novel control technique. The novel control was such that the value of m1 exponentially declined from a large value to 0.5 when the subjects lifted the PAO with the robot and the command velocity of (6) exceeded a threshold. We see in Fig.8 that the load force magnitudes are linearly proportional to m1 and subjects do not feel the change of m1 (Fig.6). This is why the reduction in m1 would also reduce the PLF proportionally. Reduction in PLF would not adversely affect the relationships in (2) because the subjects would not feel the change of m1. We used the following equations for m1 and m2 to modify the control of Fig.5. The digit 6 in (9) was determined by trial and error because the applied PLFs were over 6 times larger than the actually required PLFs (Fig.9). The novel control is illustrated in Fig.10 as a flowchart.
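Since the exact forms of (9) and (10) are not reproduced above, the sketch below only illustrates the kind of rule described in the text: m1 starts at a large value and decays exponentially toward 0.5 once the command velocity crosses a threshold, while m2 stays fixed. The starting value of m1, the decay rate, and the velocity threshold are assumptions made for illustration, not the paper's equations.

```python
import math

# Illustrative sketch of the novel control idea: once the command velocity exceeds
# a threshold, m1 decays exponentially from a large starting value toward 0.5 kg.
# Starting value, decay rate, and threshold are assumptions, not the paper's (9)-(10).
M1_START, M1_FINAL = 6.0, 0.5   # kg (6.0 chosen only because PLFs were ~6x too large)
M2 = 0.5                        # kg, kept constant
DECAY_RATE = 8.0                # 1/s, assumed
V_THRESHOLD = 0.05              # m/s, assumed trigger for the decay

def m1_value(t_since_trigger, triggered):
    """Return the current inertial mass m1 used by the controller."""
    if not triggered:
        return M1_START
    return M1_FINAL + (M1_START - M1_FINAL) * math.exp(-DECAY_RATE * t_since_trigger)

# Example: report m1 over time once the command velocity has crossed the threshold.
triggered, t_trig = False, 0.0
for step in range(0, 1001):
    t = step * 0.001
    v_cmd = 0.2 * t                      # assumed ramping command velocity [m/s]
    if not triggered and v_cmd > V_THRESHOLD:
        triggered, t_trig = True, t
    m1 = m1_value(t - t_trig, triggered)
    if step % 250 == 0:
        print(f"t={t:.2f}s  v_cmd={v_cmd:.3f}  m1={m1:.3f} kg")
```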
Results of Experiment 2
We determined mean PLF for each size PAO for the modified control of experiment 2 (after control modification) and compared them to that determined in experiment 1 for lifting objects at m1=0.5 and m2=0.5 (before control modification). The results are shown in Table 4. Results show that the novel control strategy reduced PLFs significantly.
Mean peak accelerations for different object sizes after the control modification are shown in Table 5. The results show, if we compare these to those obtained before the control modification, that peak accelerations significantly reduced due to the control modification. The reason may be that the reduced PLFs after the control modification had reduced the accelerations. On the other hand, velocity slightly reduced due to the control modification. It means that the control modification made the system slightly slower. However, the reduction in system velocity was very small and it did not affect performances, as follows. We determined mean PAWs for each size object separately after the control modification and compared them to those derived in experiment 1 for m1=0.5 and m2=0.5. The results are shown in Fig.11. The figure shows that mean PAWs were unchanged even though m1 reduced exponentially due to the control modification. It indicates that the control modification did not adversely affect the relationships of (2).
We determined the mean evaluation scores for each size PAO for experiment 2 and compared them to those for experiment 1. The results are shown in Table 6 for the medium size object. The results show that the novel control strategy of experiment 2 improved performances by reducing excessive PLFs and accelerations, and the performances are quite satisfactory. Performances in experiment 1 are also more or less satisfactory, though they are much inferior to those in experiment 2. Performances in experiment 1 are satisfactory because the control scheme (Fig.5) used for experiment 1 was also a novel control, as weight perception was included there. The control scheme (Fig.10) used in experiment 2 is an improvement of the control scheme used for experiment 1 and hence it resulted in better performances.
The subjects felt reduced gravity for cooperative lifting because the gravity was shared by two subjects [29]. Synchronization between two subjects in cooperative lifting might be slightly less perfect, which might affect the performances slightly [28]. We think that the satisfactory performances have been produced due to the combined effects of the appropriate values of m1 and m2, and of the application of the novel control. However, the performances may be further optimized by optimizing the values of m1 and m2 in (9) and (10) respectively.
Table 6. Mean performance evaluation scores with standard deviations (in parentheses) for the medium size object before (expt.1) and after (expt.2) control modification.
We see in Table 6 that for experiment 2, motion and mobility got the same scores. It means that motion and mobility are interrelated and changes in one may affect the other. It means, good motion produces good mobility and vice versa. Similarly, (i) stability and safety, and (ii) naturalness and ease of use are interrelated. It means that a stable system is safe and a safe system is stable, at least for our case. Similarly, an easy to use system is natural and a natural system is easy to use.
ANOVAs showed that evaluation scores were not affected by visual object sizes. The reason may be that the subjects evaluated performances using haptic cues where visual cues of objects had no influences [27]. Variations among subjects were also found statistically insignificant (F9,18<1 for each case).
We also conducted ANOVAs (object size, subject) on peak load force, peak velocity and peak acceleration for experiments 1 and 2 separately. We found that variations between object sizes were significant (p<0.01 at each case).
On the other hand, variations between subjects were not significant at each case (p>0.05 at each case). Hence, the results may be used as a general model. However, the generality may be increased if we increase the number of trials, object sizes, shapes, subjects (including end-users e.g. factory people), experiment protocols etc.
Effectiveness and Accuracy of the Findings
The effectiveness of the proposed robotic system may be further enhanced by reflecting back-drivability, inertia, compliance, friction and gear effects in the ball screw, and the servomotor control response delay in the proposed assist control. It also seems beneficial to estimate the subjective force corresponding to the PAW and thereby make the human's subjective force more objective. The effectiveness and accuracy of the control may be increased by replacing the ball screw with a linear or a direct-drive motor. The servomotor was kept in velocity control mode. Another mode, torque control mode, may be tested to further justify the findings.
We used objective measurements where possible (e.g. Fig.7) though the results are somewhat based on subjective data (e.g. Fig.6). However, we argue that the subjective results are acceptable because (i) it is difficult to collect objective data in a human-robot interaction system, and (ii) this type of subjective results have already been proven reliable in many cases (e.g. [43]).
Zero-Gravity, Zero-Inertia and Zero-Load Force
It may be assumed that the PLF and perceived heaviness could be the minimum if m1=0 and m2=0 were used in (2). But zero-gravity (m2=0) is not good for lifting objects with power-assist because humans lose some haptic information for zero-gravity, which reduces the human's weight perception ability and situational awareness [34]-[36]. Zero-inertia (m1=0) is not possible because the subjects experience severe oscillations for this case [34].
The human can keep the object stand still for a while if the grip force is very high and the load force fh is very small. At this condition fh is not zero, but the displacement is almost zero. The object may have motion even if fh=0 (human does not touch the object), but it does not indicate a human-robot system and it does not provide any power assistance to the human.
Balance and Synchronization between Two Humans
Two subjects grasped the handles and lifted the objects with power assist as shown in Fig.4. We think that the resultant load force (fh) derived in (2) can be further expressed as (11), where fh1 and fh2 are the load force for subject 1 and subject 2 respectively. fh1 +fh2 = fh (11) We assume that fh1=fh2, and fh1 and fh2 are also synchronized. If (fh1-fh2) is high and fh1 and fh2 are not so synchronized, the system may result in instability and lack of safety [28]- [29].
Slip of the Object
There was no possibility of object-slip and the subjects did not experience any slip of the objects when doing experiments with the present setup. We think that slip prevention is related to the configuration of the real robot systems. It will need to configure the real robot system in such a way that the configuration prevents the slip of objects. Object grasping devices and object's surface conditions (friction coefficient) also contribute to slip avoidance. However, operator's training and awareness are also important to prevent the slip.
Validity of the Experiment System
We could not use a real robotic system and heavy objects, but we used a simulated system, low weights, and small objects for the following reasons: (i) we, at this stage, want to reduce the costs of developing the real system because a real system suitable for manipulating heavy objects is expensive, (ii) we want to compare the findings of this paper to that of other psychological experiment results available in literatures, and for this reason our object sizes and weights should be small because most of the psychological tests use low weights and small objects (such comparison with equal basis may produce important information that may help develop the real system in near future adjusting with human perceptions such as naturalness, best feelings etc.) [27], (iii) we want to use the preliminary findings of this paper (e.g., design ideas, assumptions, hypotheses, dynamic modeling, control programming, system characteristics reflecting human-robot interactions such as relationship between actual and perceived weights, force and motion characteristics etc.) to develop a real robot capable of manipulating heavy objects in near future. We believe that the findings we have derived will work (but magnitudes may change) for heavy and large size objects. It may be true that the findings are incomplete until we validate those using heavy objects and a real robot. But, it is also true that the findings are novel, important, useful and thus have potential application for developing real robots for manipulating heavy objects.
We put m2=0.5kg in the experiment and the human who lifts the object with the system feels 25% of m2 value, i.e. 0.125kg. It means that the human will feel only 0.125kg even when he will lift a very heavy object (such as 20kg) with the real system in industry because the load will be carried by robot system (not by human) and human's cooperation (grasping and applying forces) will control motions (displacement, velocity, acceleration) of the lifted object. Hence, it will be possible for the humans to lift heavy objects with only hands and the whole body will not need to be used.
The perceived weights, load force, motion etc. are controlled. The main factor affecting biomechanical properties is the magnitude of the load felt by human when manipulating heavy objects. In our case, the human will feel only 0.125kg even when manipulating a very heavy load with the system, which is far below the biomechanical tolerance limits (e.g., compressive, tensile, and torsional strength limits, fatigue limit) at different locations of human body [44].We think that the dynamic psychophysical ratings for m1 and m2 in this paper will not only produce good maneuverability, stability, naturalness etc., but also satisfy operator's biomechanical criteria such as motions, hand movement and posture, joint torque, joint shear, joint stress, joint compression, joint work distribution, total mechanical work, muscular moments at joints, torque equilibrium, muscle force, forces acting on musculoskeletal system, low back stress etc. that will help avoid injuries, risks, vibrations and jerks on human body when manipulating heavy objects with the robot system. We did not measure muscle or nerve activities though we believe that these activities will be favorable due to small perceived weights. In the present case, the perceived weight is reduced and the motions are favorable, which is a clear indication of power assistance.
Validity of the Control Method
Position based impedance control and torque/force based impedance control produce good results. Results may be different for force control aimed at reducing excessive force [37]-[38]. Our control was limited to position based impedance control. We used this control method for the following reasons/advantages (though it may have some disadvantages):
1. Position control compensates for the effects of friction, inertia, viscosity etc. In contrast, these effects must be considered for force control, and it is very difficult to model and calculate the friction force. Dynamic effects, nonlinear forces etc. affect system performances for force control in a multi-degree-of-freedom system.
2. The ball-screw gear ratio is high and the required actuator force is low for position control. However, the opposite is true for force control.
3. It is easy to realize the real system with position control for a high gear ratio. However, the opposite is true for force control.
The actual system dynamics includes the thrust force of the actuator or the actuator force (fa) as given in (12). The actuator force (fa) is parallel to fh. However, the system dynamics we considered in (2) was the targeted (model) dynamics for the system.
If the difference between m and m1 is very large, i.e., if (m - m1) is very big, the position control imposes a very high load on the servomotor that results in instability, which is not so intensive for force control [37]-[38]. The position control method we proposed was proven effective because it produced satisfactory performances (Table 6). Position control methods have been proven effective for many similar devices [24], [30], [38], [39], which justifies the validity of the proposed control method. The control in Fig.5 is not so complicated. However, there is novelty in this control in that the human's perception is included in the control. Again, the novel control strategy (Fig.10) was derived from it (Fig.5), and it includes human features.
The Proposed Real System
We propose to use the findings as guidelines to develop power assist devices to manipulate heavy objects in industries. The configuration of the real systems should be such that operators working with the real power assist robot may lift heavy objects manually with a power-assist process or with a power assist process in cooperation with another automatic process or with their combinations. The object may be transferred with a transfer device such as a belt-conveyer for the automatic process and with a multi-DOF power-assisted cart [24], a crane [22], a hoisting machine, a suspension system [23], a specially designed device etc. for the power-assist process. The structure of the proposed system for manipulating heavy objects may be at first a 3-DOF system consisting of vertical lifting and horizontal leftright and forward-backward translational motions [40]. Then, the system may be improved to a 6-DOF system with rotational facilities.
Conclusions and Future Works
This paper successfully presents a model of power assist system with the design of its control for lifting objects by two humans cooperatively based on weight perception, load forces and motion features. We included weight perception in dynamics and control. We determined psychophysical relationship between actual weights and PAWs, excess in load forces, and analyzed force and motion features. We then designed, implemented and evaluated a novel control scheme based on the human characteristics, which improved the performances.
This paper presents an exclusive and complete model of weight-perception-based power-assist control for cooperative lifting of objects by two humans and proves its effectiveness. We addressed most of the power-assist control parameters (section 4), satisfied most of the requirements of power-assisted manipulation (section 1.4), and thus attempted to overcome the limitations of the existing power assist systems (section 1.3). Findings of this paper are novel in terms of theory, concepts, experiments, applications, performances etc., have competitive advantages over the existing counterparts, and will help develop power assist devices that may satisfy most of the required conditions in manipulating heavy objects in industries.
We will verify the results using heavy objects and real robots. Experiments in torque control mode of the servomotor will be conducted to verify the results. Separate but interacting controllers may be designed and evaluated for each of the subjects. The system will be upgraded to a real multi-DOF system.
A Cryogenic Integrated Noise Calibration and Coupler Module Using a MMIC LNA
A new cryogenic noise calibration source for radio astronomy receivers is presented. Dissipated power is only 4.2 mW, allowing it to be integrated with the cold part of the receiver. Measured long-term stability, sensitivity to bias voltages, and noise power output versus frequency are presented. The measured noise output versus frequency is compared to a warm noise diode injected into cryogenic K-band receiver and shows the integrated noise module to have less frequency structure, which will result in more accurate astronomical flux calibrations. It is currently in operation on the new 7-element K-band focal plane array receiver on the NRAO Robert C. Byrd Green Bank Telescope (GBT).
I. INTRODUCTION
Intensity flux calibration on cryogenic radio astronomy receivers has traditionally been performed using noise diodes placed outside the cryostat, routed into the cryostat, and injected into the signal path between the antenna feed and the cryogenic low-noise amplifier (LNA) through a coupler (typically ~30 dB) [1,2]. The noise diode is not integrated with the cold receiver since the power dissipation of a typical diode is typically several hundred mW or more. Fig. 1 shows a block diagram of such a typical dual-polarization radio astronomy receiver.
The orthomode transducer (OMT) separates the two orthogonal polarizations received by the feed horn. Each polarization then has a noise signal injected by a coupler before proceeding through an isolator to a low-noise amplifier (LNA) and mixer. While observing, the noise diode is turned on and off with typically a 1 sec period and 50% duty cycle. The noise level is designed to be roughly 5-10% of the total system noise temperature to avoid degrading the sensitivity of the observations. For a focal plane array receiver such as the GBT K-band focal plane array (KFPA) [3], this method of noise injection vastly complicates the cable and waveguide routing inside the cryostat as well as adds extra dewar feedthrough transitions. It is highly desirable to integrate the noise generator with the coupler on the cold stage, eliminating all the associated cabling, vacuum feedthroughs, and thermal transitions. This also has the significant benefit of less frequency structure due to standing waves and therefore more accurate calibrations. Flat spectral baselines are critical in many observations, such as detecting very faint, broad lines like CO from high-redshift galaxies [4]. It is especially important if one is trying to conclusively detect complex prebiotic molecules with more complicated spectral signatures. Using the cryogenic noise calibration module (NCM) described in this work, the typical dual-polarization radio astronomy receiver simplifies to the block diagram shown in Fig. 2. To be integrated with the cold receiver components, the noise source must be able to generate sufficient noise power while dissipating relatively little power, less than 10 mW, especially for a multi-pixel array with one noise source per pixel per polarization. This paper describes the design, construction, and testing of a cryogenic Noise Calibration Module (NCM) built with a commercial MMIC LNA and used in a 7-pixel K-band array receiver. This noise source produces a calibration signal at the appropriate power level, dissipates only 4.2 mW, and results in smoother spectral baselines than an injected warm noise diode.
II. SUMMARY OF TYPICAL ASTRONOMICAL CALIBRATION
The goal of astronomical flux calibration is to translate measured power levels on a given spectrometer channel output to a source spectral flux density of the observed patch of sky (S_src). The first step is to convert a measured power level, P, into a noise temperature, T. The output power from a receiver is P_sys = k T_sys G_rec B, where k is Boltzmann's constant, G_rec is receiver gain, and B is receiver bandwidth.
To remove the k, G_rec, and B factors, it is convenient to measure power ratios, i.e., P_sys,1/P_sys,2 = T_sys,1/T_sys,2. To determine the noise temperature when pointed at an astronomical source (T_src) for which we are trying to determine its flux density (S_src), the noise power when pointed at a nearby reference "blank" patch of sky (P_ref) is measured. The reference patch is assumed to be nearby so that the differences in noise contributed by the atmosphere are small. Then, a first power ratio, P1, is calculated as in (3). The desired value is T_src - T_ref, so (3) must be multiplied by T_ref. T_ref cannot, however, be obtained by a single measurement of power, since that will contain the kGB component, so some other method to determine T_ref is needed. This is the function of the calibration noise diode or noise calibration module (NCM). While observing the nearby reference "blank" patch of sky, the NCM is switched on and off, and a second power ratio, P2, is calculated as in (4). Note that T_ref,on - T_ref,off is simply the noise added by the noise calibration module, T_cal, so that (5) follows. Substituting (5) into (3) gives (6). Therefore, T_src - T_ref can be calculated by taking the product of two measured power ratios, P1 and P2, and T_cal.
With T_src determined, S_src can be calculated using (7), where η_A is the antenna efficiency, τ is the atmospheric opacity, and A = 1/sin(elevation). For the GBT, this equation takes a specific numerical form, where T_src is in K and S_src is the spectral flux density of the source in Jy (10^-26 W/m^2 Hz). Note that to calculate S_src from T_src, we need to know the antenna efficiency, η_A. For a focal plane array, η_A will be pixel-dependent. T_cal is measured in the lab before the receiver is installed on the telescope, but to account for long-term variation, T_cal is measured on the sky by observing astronomical sources of known flux density [5] and using Eq. (7). This is typically done every few hours during an observation and can measure T_cal to within 1% accuracy [6], better than a typical laboratory Y-factor measurement. Since T_cal is recalibrated every few hours during an observation, it is the stability of T_cal over hour timescales that is the critical parameter.
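The chain of ratios above can be made concrete with a small numerical sketch. Because the intermediate equations are not reproduced here, the exact definitions of the two power ratios below (source-minus-reference over reference, and reference over cal-on-minus-cal-off) are one consistent reading of the text, and every numerical value, including the telescope gain, is invented for the example.

```python
import math

# Illustrative flux-calibration arithmetic; the ratio definitions follow one consistent
# reading of the text, and every numerical value below is invented for the example.
P_src = 1.05e-9      # power on source [arbitrary linear units]
P_ref = 1.00e-9      # power on the nearby "blank" reference patch of sky
P_ref_on = 1.06e-9   # reference patch, noise calibration module switched on
P_ref_off = 1.00e-9  # reference patch, noise calibration module switched off
T_cal = 2.0          # injected calibration noise [K]

P1 = (P_src - P_ref) / P_ref                # corresponds to (T_src - T_ref) / T_ref
P2 = P_ref_off / (P_ref_on - P_ref_off)     # corresponds to T_ref / T_cal
delta_T = P1 * P2 * T_cal                   # T_src - T_ref, in kelvin
print(f"T_src - T_ref = {delta_T:.3f} K")

# Converting to a flux density needs the antenna efficiency, the atmospheric opacity,
# and a telescope gain; the gain used here is an assumed placeholder, not the GBT value.
eta_A, tau, elevation_deg = 0.65, 0.05, 45.0
A = 1.0 / math.sin(math.radians(elevation_deg))
gain_K_per_Jy = 1.0
S_src = delta_T * math.exp(tau * A) / (eta_A * gain_K_per_Jy)
print(f"S_src = {S_src:.3f} Jy (illustrative only)")
```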
III. DESIGN DESCRIPTION
The fundamental noise source is a commercial off-the-shelf (COTS) MMIC LNA from United Monolithic Semiconductor (UMS), part number CHA2092b. The MMIC LNA, though not specified by the manufacturer for cryogenic operation, generates a fairly flat noise output over 18-26 GHz at 15K ambient temperature.
A few different MMIC LNAs were packaged and tested at cryogenic temperatures before settling on this particular model. The desired quantity to maximize was the "noise generation efficiency", or the amount of thermal noise generated divided by the applied dc power to the module. The reason applied dc power is important is because this module is to be used on the cold stage of a multi-pixel array. Excessive dissipated power will increase the required cooling power of the refrigerator, potentially warming the entire cryogenic portion of the receiver and degrading the receiver noise. This noise generation efficiency was first roughly estimated to be the effective input noise temperature (as specified in the MMIC datasheet, typically as a noise figure), times the specified small-signal gain, divided by the applied dc power. Commercial MMIC datasheets typically do not give cryogenic performance data, so it was necessary to actually measure several MMICs at cryogenic temperatures (about 15K in this case) to find the MMIC with the highest cryogenic noisegenerating efficiency.
The MMICs were packaged in a WR-42 test block. The output of the LNA was bonded to a 5-mil thick Alumina microstrip to WR-42 E-plane probe transition. The transition has a simulated return loss greater than 20 dB from 18-26 GHz. The MMIC input was not terminated. It was found that terminating the input with a 50 Ω load had little or no effect on the total noise output. For the CHA2092b, a bias of approximately 0.7 V and 6 mA produced output noise of 1000 +/- 500 K over the entire 18-26.5 GHz band at 15 K ambient temperature, giving a cryogenic noise-generating efficiency of approximately 240 K/mW.
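The figure of merit quoted above is easy to reproduce: the ~240 K/mW follows directly from the measured output noise and the applied bias power. The small helper below simply restates that arithmetic using the numbers given in the text.

```python
# Noise-generating efficiency: output noise temperature divided by dissipated dc power.
def noise_generation_efficiency(t_noise_K, v_bias_V, i_bias_A):
    p_dc_mW = v_bias_V * i_bias_A * 1e3
    return t_noise_K / p_dc_mW, p_dc_mW

eff, p_dc = noise_generation_efficiency(t_noise_K=1000.0, v_bias_V=0.7, i_bias_A=6e-3)
print(f"dc power = {p_dc:.1f} mW, efficiency = {eff:.0f} K/mW")   # ~4.2 mW, ~238 K/mW
```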
Measurements of the output noise sensitivity to both gate and drain bias were also performed. As expected, the noise output is much more sensitive to gate bias than drain bias. The measured noise power sensitivity to drain voltage was approximately 10,000 K/V. To keep bias voltage fluctuation from changing noise output by no more than 1%, or about 20K, the drain voltage bias should be kept stable to within 2 mV, or about 0.3% of the 0.7 V typical bias point. Fig. 3 shows the measured output noise for three different gate voltage / drain current levels. It shows gate voltage sensitivity to be about ten times drain voltage sensitivity, or 100,000 K/V, so gate voltage bias should be kept stable to within 0.2 mV, or about 0.04% of the typical -0.54 V gate bias point. In the actual receiver, the gate voltage bias is supplied through a 10:1 resistive voltage divider. The noise source output is also expected to be highly dependent on temperature, but since this module is to be used on a high-mass cold plate inside a cryogenic Dewar, this was not a concern. Measurements of the long-term stability of the noise output were also performed. Fig. 4 shows the measured output noise of the MMIC test block at 15K ambient temperature over a period of 250 minutes. The small long-term fluctuation seen is believed to be due primarily to bias voltage instability, which should be minimized using the highly regulated receiver power supplies. Since the long-term stability is the critical parameter for the noise source's intended application, the short timescale fluctuations are not a concern and are in fact a result of subtracting two successive measurements of much larger power than the ~2500K noise power contributed by the noise source under test. Indeed, the short time stability of the noise source was later confirmed by stability measurements of the entire receiver on the telescope. With the noise calibration source turned on, the Allan time was consistently measured to be about 50s.
For use in a cryogenic radio astronomy receiver with two channels (one for each sky polarization), the MMIC LNA was integrated with a six-port Bethe coupler, which injects a small amount of additional noise into each polarization channel of the receiver for calibration. The noise added to each receiver channel needs to be typically 5-10% of the total system noise, so in this case, the injected noise from the calibration module was specified to be 1.5-6.0K, implying a coupling value of approximately 25dB from the MMIC LNA output into each receiver channel. Simulation results for the coupler show that from 18-26.5 GHz, the coupling is 25 +/-1 dB. The input match presented to the noise source and main signal paths is better than -40 dB, the directivity is greater than 20 dB, and the RF channel isolation is better than 60dB. The coupler is fabricated by drilling holes from the outside of the block, as described in [7].
IV. SINGLE PIXEL RECEIVER RESULTS
The return loss and insertion loss of the K-band noise calibration module (NCM) were measured with a network analyzer. Return loss measurements are shown in Fig. 7 for each channel. The return loss of a matched K-band load was also measured and is shown to indicate the quality of the network analyzer calibration. As shown, the return loss for each channel is better than 20dB across the entire band. The measured insertion loss of both channels is shown in Fig. 8. This is measured at room temperature. At cryogenic operating temperature, the insertion loss should be even less.
This NCM was then integrated into the single-pixel KFPA receiver. Receiver noise of the entire single pixel receiver was measured in the laboratory as well as on the telescope. The NCM is biased with a constant voltage supply, so that the gate voltage, V gs , stays constant while the drain voltage, V ds , switches from zero to its nominal value (0.667 V in this case). The IF power was recorded with a spectrum analyzer at four different LO frequencies. For each LO frequency, a Y-factor measurement was performed by measuring the IF output power while the receiver is looking into an emissive target at either room or cryogenic temperature, called the hot-and cold-loads, respectively. This Y-factor measurement is performed twice, once with the calibration noise source turned on and once with it turned off, to determine the receiver's equivalent noise temperature in both states. The difference in these noise temperatures is the injected noise, or T cal , and is plotted in Figs. 9 and 10. The rms detector of the spectrum analyzer was used with a resolution bandwidth of 3 MHz and 0.2 s sweep time with 501 points across a 2 GHz span. Averaging was turned on with ten sweeps averaged. Therefore, the calculated accuracy of the measured power at each 4 MHz point is 1.1%. Ten successive points were averaged to give power every 40 MHz with 0.3% accuracy. The accuracy in each Y-factor calculation is then 0.6%. For the typical Y factor of 3 measured for this receiver, this gives an absolute accuracy of 0.18 for each Y factor measurement (with cal source on and cal source off). This results in a receiver noise temperature accuracy of 3%, or about +/-0.5K for the 15-20K receiver temperature. Subtracting two receiver temperature measurements to calculate the contributed noise temperature of the calibration source gives an accuracy of +/-1K for T cal . This +/-1K variation is seen in Figs. 9 and 10 and is not a characteristic of the noise source. The measured T cal is very similar for both the LCP and RCP channels, as expected from the symmetry of the six-port Bethe coupler. Note that V ds remained constant for all four LO frequencies.
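The T_cal extraction described above can be written out explicitly. The sketch below uses the standard Y-factor relation T_rx = (T_hot - Y*T_cold)/(Y - 1) and subtracts the cal-off receiver temperature from the cal-on one; the load temperatures and Y factors are invented for illustration and are not the measured values.

```python
# Y-factor determination of receiver temperature, with and without the calibration
# noise source, and extraction of the injected noise T_cal. Numbers are illustrative.
def receiver_temperature(y_factor, t_hot=295.0, t_cold=77.0):
    """Standard Y-factor relation: T_rx = (T_hot - Y*T_cold) / (Y - 1)."""
    return (t_hot - y_factor * t_cold) / (y_factor - 1.0)

y_cal_off = 3.30   # assumed measured Y factor with the cal source off
y_cal_on = 3.22    # assumed measured Y factor with the cal source on

t_rx_off = receiver_temperature(y_cal_off)
t_rx_on = receiver_temperature(y_cal_on)
t_cal = t_rx_on - t_rx_off   # injected calibration noise in kelvin
print(f"T_rx(off) = {t_rx_off:.1f} K, T_rx(on) = {t_rx_on:.1f} K, T_cal = {t_cal:.1f} K")
```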
In practice, V ds can be adjusted as a function of LO frequency to maintain a more constant T cal value. Two or three values of V ds need to be characterized to accomplish this. Having multiple T cal levels would also be useful for certain types of astronomical observations that may require specialized calibrations. Fig. 11 shows the measured T cal of the single pixel test receiver compared to the measured T cal for the old GBT Kband receiver (K0). The new T cal values were averaged over 200 MHz in order to make a better comparison with the K0 values. The K0 receiver has a traditional noise calibration architecture, where a noise diode outside the Dewar at room temperature is injected into the signal path via a cold coupler inside the Dewar. As shown, the spectral structure of the new T cal is much smoother. The structure in the K0 calibration signal is likely not from the noise diode, but from the thermal transition, coaxial-to-waveguide transitions, and cable length between the noise diode and the cryogenic coupler. Also shown is a fourth degree polynomial fit to the new T cal showing how easily the T cal spectrum can be modeled when it is this smooth. The standard deviation of the difference between the actual measurement and the polynomial fit is 7.5% of the T cal value. This 7.5% is the error one would have in using this polynomial model of T cal rather than using an astronomical calibration to determine T cal versus frequency. T cal was measured several times in the lab over a period of one month and was stable within a few percent. The single pixel test receiver was then placed on the telescope and successfully used to make calibrated astronomical measurements.
V. SEVEN PIXEL ARRAY RESULTS
Eight more K-band noise calibration modules were constructed for use in the GBT K-band 7-element focal plane array. Fig. 12 shows the measured injected noise at channel B of all eight modules plus channel A of one of the modules, indicating a high level of repeatability in the noise spectrum. Since each channel of each pixel can be independently calibrated, it is not critical that these values lie on top of one another, but only that they are all relatively smooth and can be electronically tuned to give the correct range of injected noise values.
Fig. 11. Measured injected noise versus frequency of the K-band noise calibration module as compared to the measured injected noise of the current K-band receiver using a warm noise diode as noise source.
Fig. 13 shows the cryogenic portion of the 7-pixel K-band array receiver, indicating the placement of the NCMs. Note that the odd shape of the NCMs is to ensure closest packing of the feed horns.
VI. CONCLUSION
This paper describes the design and performance of an integrated cryogenic noise source and coupler used to provide a stable noise source with low power dissipation for astronomical receiver calibration, particularly well-suited for array receivers, where excess cabling and thermal transitions are eliminated. A prototype unit was characterized for return loss, insertion loss, noise output, and noise output stability as part of a single pixel test receiver. Several more modules were produced and integrated into a 7-pixel array receiver.
An interesting outgrowth of this work is the possibility of using a similar module for room temperature Y-factor measurements. As presented, the output noise is highly dependent on the drain current. In a well-regulated thermal environment with a well-regulated power supply, it may be possible to generate two or more well-defined and repeatable noise spectra over a certain bandwidth, which can then be used as the "hot" and cold" loads for a Y-factor measurement. Multiple noise output levels would allow for different ratios of "hot" to "cold" noise powers, chosen to best suit the expected noise temperature of the device under test.
Coherent states of parametric oscillators in the probability representation of quantum mechanics
Glauber coherent states of quantum systems are reviewed. We construct the tomographic probability distributions of the oscillator states. The possibility to describe quantum states by tomographic probability distributions (tomograms) is presented on an example of coherent states of parametric oscillator. The integrals of motion linear in the position and momentum are used to explicitly obtain the tomogram evolution expressed in terms of trajectories of classical parametric oscillator
Introduction
In quantum mechanics, the states of a particle, e.g., of the harmonic oscillator, are identified with the wave functions ψ(x, t) satisfying the Schrödinger evolution equation [1], where x is the oscillator position and t is time. The energy levels and stationary states of the oscillator and other systems are obtained by solving the stationary Schrödinger equation Ĥψ_E(x) = Eψ_E(x), where Ĥ is the quantum system Hamiltonian. Among all solutions of the evolution Schrödinger equation, there are specific Gaussian-packet solutions for which the probability distribution P(x, t) = |ψ(x, t)|² of the oscillator position at a given time moment is described by the normal probability distribution of the position x, with a given mean value ⟨x(t)⟩ and the dispersion σ(t) = ⟨x²(t)⟩ − ⟨x(t)⟩². Such packets were studied by Schrödinger [1], and these oscillator states are similar to classical oscillator states with fluctuating position and momentum. In 1963, while studying the coherence properties of photons, Roy Glauber [2] introduced the notion and terminology of the field coherent states; see also [3][4][5][6][7][8][9]. We dedicate this paper to the memory of Roy Jay Glauber, the great scientist and Nobel Prize Winner, on his first death anniversary, December 26, 2019. Ad Memoriam of Roy Glauber and George Sudarshan is published in [10,11] and is also available on link.springer.com/article/10.1007/s10946-019-09805-4 and www.mdpi.com/2624-960X/1/2/13.
For a single mode, the field is modeled by the quantum harmonic oscillator, and the wave function ψ α (x, t) of the harmonic oscillator is the Gaussian packet satisfying the Schrödinger evolution equation. Generic Gaussian states and entropic inequalities for these states for multimode photon states were studied in [12].
The aim of our work is to discuss the coherent states of the parametric oscillator, i.e., of the oscillator with time-dependent frequency ω(t). The Schrödinger evolution equation for such an oscillator was solved in [17]. There are no energy levels of the parametric oscillator, and the energy is not an integral of motion. For the classical parametric oscillator, the integral of motion quadratic in the position and momentum was found by Ermakov [18].
The quantum operator quadratic in the position and momentum, being the integral of motion, contains an explicit dependence on time in the Schrödinger representation, as was found in [19]. This quantum integral of motion is an analog of the classical Ermakov invariant, and it was used to find different solutions to the Schödinger equation in [19].
It was shown in [20] that the parametric oscillator has the linear (in the position and momentum) integrals of motionÂ(t) and † (t), which have the commutation properties of bosonic annihilation and creation operators, i.e., [Â(t), † (t)] = 1. In view of what we said above, one can extend the construction of Glauber coherent states to the case of the parametric oscillator; see, e.g., [21]). In view of developing the technique of homodyne tomography of photon states [22] based on the relation between the Radon transform [23] of the Wigner function [24] of the quantum system state with optical tomogram, which is a fair probability distribution of the photon quadrature found in [25,26], the suggestion to identify the quantum state with the probability distribution as a primary object was done in [27]; see also the review [28].
The kinetic equation for the tomographic probability distribution, which is the optical tomogram of the quantum state, with the wave function obeying the Schrödinger evolution equation, was obtained in [29,30]. This equation is compatible with the kinetic equation for the symplectic tomogram of quantum states introduced and studied in [27,31]. Such tomogram exists and obeys the kinetic equation for the fair probability distributions also in the case of a spin-1/2 particle, with the wave function satisfying the Pauli equation [32].
Thus, in addition to the review of Glauber's coherent states for the wave function of the parametric oscillator, we consider the oscillator coherent states in the probability representation of quantum mechanics.
We present the evolution for the tomographic probability distributions determining the oscillator states and construct the probability distributions of the oscillator position in the form of normal distribution with time-dependent parameters. The tomographic probability distributions identified with the coherent states satisfy the kinetic equations equivalent to the Schrödinger equation for the wave function and the von-Neumann equation for the density matrix of the parametric oscillator. As an application of the formalism, we discuss the stimulated Raman scattering process in the probability representation of quantum mechanics in [33,34,35]. The problem of parametric oscillator was studied using different methods in [36][37][38][39][40][41][42][43].
This paper is organized as follows.
In Sec. 2, we present the method of linear integrals of motion to find coherent states of a parametric oscillator. In Sec. 3, we give a review of the conditional probability representation of quantum states of the parametric oscillator. In Sec. 4, we construct the joint probability distribution of three random variables for the parametric oscillator in coherent states. In Sec. 5, we consider the evolution of the parametric oscillator in the probability representation of quantum mechanics. Our conclusions and prospectives are given in Sec. 6.
Integrals of Motion of Parametric Oscillator and Coherent States
The parametric oscillator has the Hamiltonian Ĥ(t) = p̂²/2 + ω²(t) q̂²/2. We assume the Planck constant ħ = 1 and the oscillator mass m = 1. The Schrödinger equation for this oscillator with time-dependent frequency ω(t) was solved in [17], and various methods to study this equation and its solutions were suggested in [20]. The method based on finding the system's integrals of motion, which are operators quadratic in the position and momentum, was used in [19].
The Ermakov integral of motion for a classical parametric oscillator was found in [18].
The quantum version of the classical Ermakov invariant depends on the solution of the classical nonlinear equation [36,37,38,40,41]. Invariants, which are linear in the position and momentum operators, were found in [20].
The time-dependent operators Â(t) and Â†(t) of the form
Â(t) = (i/√2) [ǫ(t) p̂ − ǫ̇(t) q̂],   Â†(t) = (−i/√2) [ǫ*(t) p̂ − ǫ̇*(t) q̂]
are the linear integrals of motion for the function ǫ(t) satisfying the equation of motion of the classical parametric oscillator, ǫ̈(t) + ω²(t) ǫ(t) = 0. For initial conditions of the function ǫ(t) of the form ǫ(0) = 1, ǫ̇(0) = i, the integrals of motion (2) and (3) satisfy the commutation relation [Â(t), Â†(t)] = 1, and these operators coincide for t = 0 with the annihilation â and creation â† operators of the harmonic oscillator, i.e., Â(0) = â and Â†(0) = â†. The coherent state |α, t⟩ is defined as the eigenstate of the integral of motion Â(t), and it reads as in (6). One can check that the corresponding function is the normalized solution to the Schrödinger equation (2); for t = 0, it is equal to the wave function ψ0(x) = π^(−1/4) exp(−x²/2) of the oscillator ground state satisfying the condition â ψ0 = 0. The Fock states of the parametric oscillator |n, t⟩, satisfying the Schrödinger equation and the condition Â†(t)Â(t)|n, t⟩ = n|n, t⟩, where n = 0, 1, 2, . . ., are given by formula (8). The coherent states of the parametric oscillator (6) are expressed in terms of the Fock states (8) by relation (9). Since the coherent state |α, t⟩ is given by Eq. (6), one has an explicit expression for the wave function of the coherent state in the position representation. The wave function ψn(x, t) = ⟨x|n, t⟩ can be obtained using the generating function for Hermite polynomials and formula (9), where the parameter α is used to get the coefficient in the series determining the vector |n, t⟩ and, consequently, the wave function ψn(x, t) in the decomposition of the coherent-state wave function (10). We obtain the wave function ψn(x, t) in an explicit form. For ω(t) = 1 and ǫ(t) = e^(it), the coherent-state wave function (12) takes the standard form of the harmonic-oscillator coherent-state wave function.
Tomographic Probability Representation of the Parametric Oscillator States
The density matrix ρα(x, x′, t) of the coherent states (6) of the parametric oscillator has a Gaussian form. In [27], the construction of the symplectic tomographic probability representation of the system states with continuous variables, like the oscillator, was proposed using the invertible map of the state density operators at time t = 0 onto fair conditional probability distributions wρ(X|μ, ν) of a random variable (oscillator position) −∞ ≤ X ≤ ∞. It also depends on the parameters −∞ < μ, ν < ∞ characterizing the reference frame in the phase space (q, p), where this position is measured. The map is given by the relation
wρ(X|μ, ν) = Tr [ρ̂ δ(X − μq̂ − νp̂)]. (16)
The function wρ(X|μ, ν) is called the symplectic tomogram of the oscillator state. Formula (16) can be inverted to express the density operator in terms of the tomogram (probability distribution) wρ(X|μ, ν); this inverse map is Eq. (17). In (16) and (17), the operators q̂ and p̂ are the position and momentum operators, respectively; also we assume the Planck constant ħ = 1 as well as the oscillator mass m = 1.
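The map in (16) is straightforward to evaluate numerically. The sketch below computes the symplectic tomogram of the oscillator ground state from the standard fractional-Fourier-transform expression for pure states and compares it with the Gaussian that follows from (16) for this particular state; the grid sizes and the chosen (μ, ν) values are arbitrary.

```python
import numpy as np

# Numerical sketch: symplectic tomogram of the oscillator ground state via the
# fractional-Fourier-transform expression for pure states (grid sizes are arbitrary).
y = np.linspace(-10, 10, 4001)
dy = y[1] - y[0]
psi0 = np.pi**-0.25 * np.exp(-y**2 / 2)          # ground-state wave function (hbar = m = 1)

def tomogram(X, mu, nu):
    """w(X|mu,nu) = |integral psi(y) exp(i*mu*y^2/(2*nu) - i*X*y/nu) dy|^2 / (2*pi*|nu|)."""
    integrand = psi0 * np.exp(1j * mu * y**2 / (2 * nu) - 1j * X * y / nu)
    return np.abs(np.trapz(integrand, dx=dy))**2 / (2 * np.pi * abs(nu))

mu, nu = 0.6, 0.8
for X in (0.0, 0.5, 1.0):
    w_num = tomogram(X, mu, nu)
    w_exact = np.exp(-X**2 / (mu**2 + nu**2)) / np.sqrt(np.pi * (mu**2 + nu**2))
    print(f"X={X:.1f}: numerical {w_num:.5f}, Gaussian reference {w_exact:.5f}")
```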
For pure states |ψ⟩, the symplectic tomogram can be expressed in terms of the fractional Fourier transform of the wave function [44]. Tomograms of pure and mixed states are nonnegative and satisfy the normalization condition for arbitrary values of the parameters μ and ν, i.e., ∫ wρ(X|μ, ν) dX = 1.
The Wigner function can be reconstructed if the tomogram is known. For the experimental study of photon states, optical tomograms w^(opt)(X|θ) measured by homodyne detectors, where X is the photon quadrature and θ is the local oscillator phase, are used to reconstruct the Wigner function [22].
This expression follows from the relation determined by the contribution of the Dirac delta-function term in the density operator (17). For the parametric oscillator state with the wave function (7), one has ⟨q⟩ = 0 and ⟨p⟩ = 0, and the covariance term satisfies a relation depending on the correlation coefficient of the position and momentum. This equality means that the state (7) saturates the bound in the Schrödinger-Robertson [45,46] uncertainty relation.
Thus, we have the following property of the quantum parametric oscillator state (7).
Thus, the fair probability distribution (32) describes coherent states of the parametric oscillator, and this probability distribution contains complete information on the state.
Conditional and Joint Probability Distributions Determining the Oscillator's Coherent States
The symplectic tomographic probability distribution w ψ (X|µ, ν) of the parametric oscillator state with the wave function ψ(x) is determined in terms of the fractional Fourier transform of the wave function (18) [44], where X is the oscillator position measured in the reference frame of the oscillator phase space determined by real parameters µ and ν; −∞ < µ, ν < ∞.
In the case of the classical parametric oscillator, one has the relation X = μq + νp; for μ = s cos θ and ν = s⁻¹ sin θ, the reference frame parameters s and θ provide the scale changes of the form q → q′ = sq and p → p′ = s⁻¹p, along with the rotation of the axes q′ → X = cos θ q′ + sin θ p′ and p′ → P = −sin θ q′ + cos θ p′. The tomogram does not depend on the variable P.
Evolution of the Parametric Oscillator in the Probability Representation
For a given Hamiltonian of the parametric oscillator, the unitary evolution of the state vector |ψ(t)⟩ = Û(t)|ψ(0)⟩ provides the evolution of the density operator ρ̂ψ(t) = Û(t)|ψ(0)⟩⟨ψ(0)|Û†(t) and the corresponding evolution of the tomographic probability distribution. In this section, we demonstrate that this evolution is given by a specific change of the variables X, μ, ν → X(t), μ(t), ν(t) determined by the classical trajectories ǫ(t) and ǫ̇(t).
The density operatorρ(t) of an arbitrary state of the parametric oscillator evolves according to the following form of the solution of the von Neumann equationρ Here, the unitary operatorÛ (t) is the solution of the Schrödinger equation Calculating the tomographic probability distribution w ρ (X|µ, ν, t), in view of (16), we arrive at Using the relationÛ where operatorsq H (t) andp H (t) are the position and momentum operators of the parametric oscillator in the Heisenberg representation, we obtain the tomogram w ρ (X|µ, ν, t) as follows: Parameters µ H (t) and ν H (t) are linear combinations of the parameters µ and ν with coefficients depending on the functions ǫ(t) andǫ(t). The integrals of motionÂ(t) and A † (t) (3), satisfying comutation relations (4) and the conditions provide the possibilities to obtain the Heisenberg position operatorq H (t) and momentum operatorp H (t) satisfying the equations as the linear combination of operatorsq andp. This means that we obtain the following transform of the Dirac delta-function: Finally, we have explicit expressions for operatorsq H (t) andp H (t) in terms of complex functions ǫ(t) andǫ(t); they read In view of these explicit expressions, we arrive at Thus, for an arbitrary state of the parametric oscillatorρ(0), the initial tomogram w ρ(0) (X|µ, ν) becomes the tomographic probability distribution with the time dependence given by formula (48), where the parameters µ H (t) and ν H (t) are given by (53). Such kind of tomographic probability evolution takes place for arbitrary systems with Hamiltonians quadratic in the position and momentum.
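The recipe above (propagate the classical trajectory ǫ(t), then evaluate the initial tomogram at transformed reference-frame parameters) can be sketched numerically as below. The specific frequency modulation ω(t), the choice of the ground-state Gaussian as initial tomogram, and the particular linear combination used for μ_H and ν_H are assumptions consistent with the conventions ǫ(0) = 1, ǫ̇(0) = i; they are illustrative rather than a transcription of the paper's formulas (48) and (53).

```python
import numpy as np

# Sketch of the evolution recipe: integrate the classical equation eps'' + w(t)^2 eps = 0
# with eps(0)=1, eps'(0)=i, then evaluate the initial tomogram at transformed (mu, nu).
# The frequency modulation, initial tomogram, and transform are illustrative assumptions.

def omega(t):
    return 1.0 + 0.3 * np.sin(2.0 * t)        # assumed time-dependent frequency

def classical_trajectory(t_end, dt=1e-4):
    """Integrate eps(t) with a simple RK4 scheme (hbar = m = 1)."""
    eps, deps = 1.0 + 0j, 1j
    t = 0.0
    while t < t_end:
        def f(e, de, tt):                      # returns (eps', eps'')
            return de, -omega(tt) ** 2 * e
        k1 = f(eps, deps, t)
        k2 = f(eps + 0.5 * dt * k1[0], deps + 0.5 * dt * k1[1], t + 0.5 * dt)
        k3 = f(eps + 0.5 * dt * k2[0], deps + 0.5 * dt * k2[1], t + 0.5 * dt)
        k4 = f(eps + dt * k3[0], deps + dt * k3[1], t + dt)
        eps += dt * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6
        deps += dt * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6
        t += dt
    return eps, deps

def w0(X, mu, nu):
    """Initial (ground-state) symplectic tomogram, a normal distribution in X."""
    s2 = mu ** 2 + nu ** 2
    return np.exp(-X ** 2 / s2) / np.sqrt(np.pi * s2)

def w_t(X, mu, nu, t):
    """Evolved tomogram: the initial tomogram evaluated at transformed frame parameters."""
    eps, deps = classical_trajectory(t)
    mu_H = mu * eps.real + nu * deps.real
    nu_H = mu * eps.imag + nu * deps.imag
    return w0(X, mu_H, nu_H)

print(w_t(X=0.5, mu=1.0, nu=0.0, t=2.0))      # position-like tomogram at t = 2
```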
Conclusions
To conclude, we point out the main results of our work.
We reviewed the known solution to the Schrödinger equation for parametric oscillator.
We constructed the probability distributions, which can be identified with coherent states of a parametric oscillator. The dynamics of symplectic and optical tomographic probability distributions for the states of a quantum parametric oscillator is expressed in terms of classical trajectories of the classical parametric oscillator. Coherent states of a quantum parametric oscillator, which describe the phenomenon of squeezing and correlation of the oscillator's position and momentum, are considered in the probability representation of quantum mechanics, and the optical and symplectic tomograms of the oscillator are obtained explicitly. Different aspects of the tomographic approach to studying photon states, oscillator states, and qubit states were considered in [49][50][51].
The tomographic probability distributions can also describe classical oscillator states identified with the probability densities in the phase space. The classical oscillator states with Gaussian probability density in the phase space have symplectic and optical tomograms, which are normal probability distributions w cl (X|µ, ν) as in the case of quantum parametric oscillator considered in this work. But the set of such states for a classical parametric oscillator contains tomograms violating the Schrödinger-Robertson uncertainty relation. If one reconstructs the formal density operator, using such tomographic probability distribution of the classical parametric oscillator state with Gaussian tomogram and Gaussian probability density in the phase space, the formal density operator will have negative eigenvalues. The relation of tomograms of classical and quantum oscillators, as well as the case of multimode parametric oscillator and its coherent states, will be discussed in future publications.
The Drosophila Split Gal4 System for Neural Circuit Mapping
The diversity and dense interconnectivity of cells in the nervous system present a huge challenge to understanding how brains work. Recent progress toward such understanding, however, has been fuelled by the development of techniques for selectively monitoring and manipulating the function of distinct cell types—and even individual neurons—in the brains of living animals. These sophisticated techniques are fundamentally genetic and have found their greatest application in genetic model organisms, such as the fruit fly Drosophila melanogaster. Drosophila combines genetic tractability with a compact, but cell-type rich, nervous system and has been the incubator for a variety of methods of neuronal targeting. One such method, called Split Gal4, is playing an increasingly important role in mapping neural circuits in the fly. In conjunction with functional perturbations and behavioral screens, Split Gal4 has been used to characterize circuits governing such activities as grooming, aggression, and mating. It has also been leveraged to comprehensively map and functionally characterize cells composing important brain regions, such as the central complex, lateral horn, and the mushroom body—the latter being the insect seat of learning and memory. With connectomics data emerging for both the larval and adult brains of Drosophila, Split Gal4 is also poised to play an important role in characterizing neurons of interest based on their connectivity. We summarize the history and current state of the Split Gal4 method and indicate promising areas for further development or future application.
INTRODUCTION
At the end of his scientific autobiography, Francis Crick presciently noted that for neuroscience research to progress ''it would be useful to be able to inactivate, preferably reversibly, a single type of neuron in a single area of the brain'' (Crick, 1988). This desideratum was motivated by the crudeness of available methods for manipulating brain activity. When this passage was written, the technologies that would enable more refined neural manipulations were already being created, as molecular biology-Crick's first field of endeavor-steadily revolutionized other areas of biology. In 1982, Rubin and Spradling (1982) had demonstrated that a eukaryotic transposon could be used to ferry a foreign gene into the germline of a metazoan-Drosophila-and be expressed in its somatic cells. Using this method of germline transformation, Mark Ptashne's group demonstrated in 1988-the same year Crick's autobiography was published-that the yeast transcription factor, Gal4, could drive the expression of a second transgene introduced into the fly genome behind Gal4's DNA recognition site, or Upstream Activating Sequence (UAS, Fischer et al., 1988). Within a scant 5 years, Brand and Perrimon (1993) generalized this capability, creating a method for yoking Gal4 expression to the regulatory elements of randomly targeted genes, and within another 2 years, this ''Gal4-UAS system'' had been used to direct the expression of a neuronal suppressor, tetanus toxin light chain, to specific subsets of neurons in the fly brain (Sweeney et al., 1995). Reversible inactivation became possible in 2002 with the creation of a UAS-expressible version of Shi ts1 , a temperature-sensitive, dominant-negative mutant of the Drosophila dynamin gene which is required for sustained neurotransmission (Kitamoto, 2002). Methods for temperaturemediated neuronal activation followed, as did the explosive development of ''optogenetic'' tools for light-mediated neuronal activation and inactivation (Bernstein et al., 2012). Today, the toolkit of effector transgenes available to neurobiologists to manipulate and monitor neuronal function in flies and other genetic model organisms make Crick's original request seem somewhat quaint Martin and Alcorta, 2017;Luo et al., 2018). Although it remains an aspirational goal to be able to selectively target the expression of such transgenes to each individual neuronal cell type in an animal, advances in genetic targeting techniques are placing even this goal within reach.
Cell types are fundamentally distinguished by the genes that they express and genetic methods for targeting particular cell types follow a common strategy. The genetic regulatory elements (i.e., enhancers) of cell-type-specific genes are conscripted to drive the expression of an activator, such as Gal4. Just as in the two original implementations of the Gal4-UAS system, this can be done in two ways. A Gal4 construct can be fused to identified enhancer fragments of a native gene so that Gal4 is expressed under the control of these enhancers when the construct is inserted into the genome ( Figure 1A). Alternatively, a Gal4 expression construct can be inserted into or near a gene in such a way that Gal4's expression is driven by the endogenous enhancers regulating the expression of that gene. Because few genes-and more specifically, few enhancers-are truly cell-type specific, this strategy usually must be augmented by other methods for further delimiting either Gal4 expression or-what has been more generally useful-its scope of activity.
Gal4's transcriptional activity can be directly blocked by an extremely effective natural inhibitor encoded by the yeast gene Gal80. By placing Gal80 expression under the control of a second enhancer, the activity of which overlaps with that of the enhancer(s) driving Gal4 activity, one can restrict Gal4 activity to only cells in the non-overlapping part of the expression pattern (Lee and Luo, 1999;McGuire et al., 2004). This strategy is often described as implementing a logical NOT gate on Gal4 expression. While strategies that effect NOT gates are useful in excluding cells or cell types from a Gal4 expression pattern, methods that permit positive, rather than negative, selection have distinct advantages in selecting cell types. Positive selection, by implementing a logical AND function, allows one to isolate cell types based on two genes that they co-express rather than on one gene that they express and one that they do not. One combinatorial strategy for implementing an AND gate impairs not Gal4 activity per se, but instead its ability to activate the expression of a particular UAS-transgene. This is accomplished by interposing a recombinase-removable translational ''stop cassette'' between the UAS sequence and the sequence encoding the transgene (Stockinger et al., 2005). Removal of this cassette in cell types that express the recombinase effectively restricts the scope of Gal4 activity to only those cell types that express the recombinase. A disadvantage of this strategy is that it requires a unique recombinase-sensitive version of each UAS-transgene that one might want to express. Also, the excision of the stop cassette is permanent, which can result in transgene expression in unwanted cell types if there is developmental variation in the pattern of recombinase expression. A more general strategy that permits positive selection is a derivative of the Gal4-UAS system called Split Gal4.
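For readers who find it helpful to think of these intersectional strategies computationally, the logic can be summarized in a few lines of code. The sketch below is purely illustrative and is not part of any published genetic toolkit; the function and parameter names (enhancer_1, enhancer_2) are hypothetical stand-ins for whether a given enhancer is active in a particular cell.

```python
# Illustrative sketch only: whether a UAS-transgene is expressed in one cell,
# given which enhancers are active (True) in that cell.

def gal4_uas(enhancer_1: bool) -> bool:
    """Classic Gal4-UAS: expression wherever Enhancer 1 drives Gal4."""
    return enhancer_1

def gal4_with_gal80(enhancer_1: bool, enhancer_2: bool) -> bool:
    """NOT gate: Gal80 driven by Enhancer 2 blocks Gal4, so expression = E1 AND NOT E2."""
    return enhancer_1 and not enhancer_2

def split_gal4(enhancer_1: bool, enhancer_2: bool) -> bool:
    """AND gate: the AD and DBD hemidrivers must meet in the same cell, so expression = E1 AND E2."""
    return enhancer_1 and enhancer_2

if __name__ == "__main__":
    # A cell in which only Enhancer 1 is active: labeled by Gal4-UAS, excluded by Split Gal4.
    print(gal4_uas(True))               # True
    print(split_gal4(True, False))      # False
    print(gal4_with_gal80(True, True))  # False
```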
Like the Gal4-UAS system, the development of Split Gal4 was facilitated by insights derived from the earlier molecular biological investigation of transcription factor properties. Gal4 had been shown to possess distinct protein domains for binding to DNA and for activating transcription (Keegan et al., 1986;Ma and Ptashne, 1987). These two domains were incapable of promoting gene expression alone when separated, but if fused to interaction domains that brought them together they could reconstitute Gal4 transcriptional activity. This capacity became the basis of the ''yeast two-hybrid'' system widely used to identify naturally occurring protein-protein interaction domains (Fields and Song, 1989). By fusing the DNA-binding (DBD) and transcription activation (AD) domains of Gal4 to strong, heterodimerizing leucine zippers, Luan et al. (2006) exploited this feature of Gal4 to create a system in which the two Gal4 domains could be independently targeted to different cells using distinct enhancers ( Figure 1B). Only those cells in which both enhancers were active would express both Gal4 components and thus reconstitute Gal4 activity. This Split Gal4 method can target single cells or cell types in the Drosophila nervous system where it has found its greatest application. With a range of tools now available to facilitate its application, it has become the workhorse for mapping neural circuits in the fly. This method-its development, its essential toolbox, its application, and its potential for future use-is the subject of this review.
Original Instruments
When the Split Gal4 system was introduced, it consisted of three components: the Gal4 DNA binding domain (Gal4DBD; amino acids 1-147 of the native Gal4 sequence) and two alternative transcription activation domains (AD; Figure 1C). The first AD corresponded to the native Gal4AD (i.e., ''Gal4AD II,'' amino acids 768-881), while the second corresponded to the more potent AD domain of the herpes simplex virus transcription factor VP16. The Gal4DBD was fused via a short linker to one of a pair of high-affinity, heterodimerizing leucine zippers (Moll et al., 2001, called here for simplicity Zip + and Zip − ), while the Gal4AD and VP16AD were fused to the complementary zipper to cause them to associate with the Gal4DBD when both components were expressed in the same cell. Fly lines individually expressing the Zip − -Gal4DBD and Zip + -AD constructs were termed ''hemidrivers,'' and pairing the Zip − -Gal4DBD with either a Zip + -Gal4AD or Zip + -VP16AD hemidriver was shown to promote transcription of UAS-transgenes. The Zip − -Gal4DBD/Zip + -VP16AD pair had the advantage of being considerably more efficacious in doing so.
FIGURE 1 | The Split Gal4 system. (A) The binary Gal4-UAS expression system can be used to target the expression of a reporter or effector (green) to a group of cells (red circle) in which an enhancer (Enhancer 1) is active. As illustrated in the right-hand schematics, Enhancer 1 drives expression of the Gal4 transcription factor (red), which in turn drives expression of the reporter or effector gene (green), which is placed downstream of Gal4's DNA binding site (UAS). (B) The Split Gal4 system uses two enhancers with activity in overlapping cell groups (red and blue circles) to target reporter or effector expression (green) to the intersection of the two groups. The intersectional logic of expression is shown schematically in the Venn diagram (left). The right-hand panels illustrate the transcriptional mechanisms: one enhancer (Enhancer 1) is used to target expression of the transcription activation domain (AD) of Gal4 or some other transcription factor fused to the Zip + leucine zipper, while the other (Enhancer 2) is used to target expression of the Gal4 DNA binding domain (Gal4DBD) fused to the Zip − leucine zipper. Association of Zip + and Zip − brings the Gal4DBD and AD components together to reconstitute Gal4 transcriptional activity and drive expression of UAS-transgenes. (C) Design of Zip − -Gal4DBD and Zip + -AD constructs. Two Zip − -Gal4DBD constructs have been made. They share the same sequence, but the one made by Pfeiffer et al. (2010; designated here as dGal4DBD) is codon-optimized for use in Drosophila and is placed behind a Drosophila synthetic core promoter. Activation domains from three different transcription factors have been used to make Zip + -AD constructs. Zip + -p65AD and Zip + -dVP16AD are codon-optimized and show strong, high-fidelity expression.
A downside of the Zip + -VP16AD construct, however, was that when expressed under the control of specific enhancers it showed significant ectopic expression and was not therefore useful for precise targeting. Luan et al. (2006) demonstrated that the Zip − -Gal4DBD could be faithfully expressed in specific populations of cells using defined enhancers and used with enhancer-trap Zip + -VP16AD lines to restrict expression to smaller groups of cells within the population. They subsequently demonstrated the efficacy of this approach using a Zip − -Gal4DBD driven under the control of the promoter for Bursicon, a hormone that is critical for the expansion and hardening of the wings after the emergence of adult flies (Luan et al., 2012). By screening a library of several hundred Zip + -VP16AD enhancer-trap lines, the authors isolated Split Gal4 hemidriver pairs that selectively expressed in Bursicon-expressing neurons of either the abdominal or subesophageal ganglia (Figure 2A). They used these lines to demonstrate that activation of a single pair of neurons in the subesophageal zone (SEZ) was sufficient to command wing expansion in newly eclosed flies. Similarly, the Jefferis laboratory generated a much larger Zip + -VP16AD hemidriver library of approximately 2,000 lines, which they screened to isolate subsets of cholinergic neurons that expressed the transcription factor fruitless (Kohl et al., 2013) and subsets of neurons with expression in the lateral horn.
Improved AD Constructs
An alternative version of the VP16AD construct, in which a potential Hox gene binding site had been eliminated and the codon usage had been optimized for expression in Drosophila, showed considerably higher fidelity. This AD, called ''dVP16AD,'' ( Figure 1C) was first used to restrict expression of Zip + -dVP16AD to a subset of glutamatergic targets of the R7 photoreceptors in a study of the neural circuitry underlying color discrimination in flies by Gao et al. (2008). When paired with a Zip − -Gal4DBD expressed under the control of an enhancer for the histamine-gated chloride channel encoded by the ort gene (i.e., ort Gal4DBD ), the vGlut dVP16AD hemidriver restricted expression to three distinct glutamatergic cell types in the optic lobe, including the Dm8 neurons, which were shown to be responsible for UV preference. These and other Split Gal4 hemidrivers subsequently found use in the dissection of motion detection circuits in the visual system (Joesch et al., 2010;Clark et al., 2011). Enhancer trap production of Zip + -dVP16AD lines by Melnattur et al. (2014) was subsequently used to identify hemidriver pairs which in combination with an ort Gal4DBD identified specific subsets of first-order projection neurons of the medulla involved in color discrimination (Figures 2B,C).
A second Split Gal4 AD construct made with the activation domain of the human p65 transcription factor (i.e., Zip + -p65AD; Figure 1C) was introduced by Pfeiffer et al. (2010), who demonstrated that this construct, like dVP16AD, could drive robust and high-fidelity expression of a reporter transgene in combination with a Zip − -Gal4DBD. The Zip + -p65AD, together with a Drosophila codon-optimized version of the Zip − -Gal4DBD (i.e., dGal4DBD; Figure 1C) introduced by the same authors, has seen subsequent widespread use. Numerous studies have now confirmed the efficacy of the Gal4DBD, dGal4DBD, dVP16AD, and p65AD constructs shown in Figure 1C in a variety of contexts. All can be used to drive expression restricted to the cell types dictated by the enhancers used to express them. It should be noted that while the fidelity is good, even hemidrivers made with the optimized Split Gal4 constructs can exhibit occasional expression that is not evident in the patterns of parent Gal4 drivers, and verification of fidelity may be necessary in critical cases (Pfeiffer et al., 2010;Cichewicz et al., 2017).
FIGURE 2 | Cell type-specific expression achieved with Split Gal4. (A) The B AG and B SEG are groups of neurons in the ventral nerve cord (VNC) and subesophageal zone (SEZ), respectively, that express the hormone, Bursicon (left). A Burs Gal4DBD hemidriver in combination with two distinct enhancer-trap Zip + -VP16AD hemidrivers can be used to individually target each group (right panels). (B,C) Split Gal4 parsing of medulla neurons. (B) Medulla neurons that receive input from photoreceptors are labeled by the ort-C1a-Gal4 driver (ort C1a -G4, left). Three subpopulations of these neurons, Tm5a, Tm5c, and Tm20, with different projection patterns, are identified by an ort-C1a Gal4DBD hemidriver used with different enhancer-trap Zip + -dVP16AD hemidrivers (right panels).
A Split Gal4 Repressor: the Killer Zipper
Although Zip + -dVP16AD and Zip + -p65AD hemidrivers both promote significantly more robust expression of UAS-transgenes than the Zip + -Gal4AD in Split Gal4 applications, only the latter construct is repressible by Gal80 and is therefore useful for implementing a second (NOT) intersection if the further restriction of expression is required. Although the single intersection effected by Split Gal4 can often provide an impressive reduction in the number of cell types seen with Gal4, it is not uncommon for a Split Gal4 pattern to retain at least a small number of residual cell types outside of the desired pattern. In this case, further restriction of expression can be advantageous. Efforts to improve the Zip + -Gal4AD construct, either by changing the geometry and linker length of the original construct (Luan et al., 2006) or by using the full-length Gal4 activation domain (Pfeiffer et al., 2010), failed to increase its efficacy. Dolan et al. (2017) pursued an alternate strategy of creating a repressor for Split Gal4 activity that could serve an analogous purpose to Gal80. The resulting ''Killer Zipper'' construct (KZip + ) consists of a dGal4DBD fused to the Zip + leucine zipper so that it competes with Zip + -AD constructs for binding to the normal Zip − -Gal4DBD (Figure 3A). Because the active Gal4 transcription factor is a dimer, in which two Gal4DBDs form the DNA-binding pocket, the KZip + construct not only competes with AD constructs to form transcriptionally incompetent Gal4DBD dimers, but these dimers can bind to UAS sites and block binding of transcriptionally competent Zip − -Gal4DBD-Zip + -AD pairs. Because the efficacy of the KZip + construct will depend on its intracellular concentration, which will depend on the strength of the enhancer used to drive its expression, Dolan et al. (2017) created a set of universal KZip + constructs, placed behind a LexAop promoter. High-level expression of these constructs, some of which express tags that can be used to track expression (Figure 3B), can then be attained by using a LexA driver that expresses in the cell type(s) to be eliminated from a pattern. Both LexAop-KZip + and enhancer-driven KZip + constructs have proven useful in delimiting Split Gal4 expression in neuroblasts (Carreira-Rosario et al., 2018;Seroka and Doe, 2019).
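Extending the hypothetical boolean sketch introduced earlier (again, an illustration rather than any published software), the Killer Zipper adds a NOT term on top of the Split Gal4 AND gate, so that a third enhancer can subtract cells from the intersection:

```python
def split_gal4_with_kzip(enhancer_1: bool, enhancer_2: bool, enhancer_3: bool) -> bool:
    """Split Gal4 AND gate further restricted by KZip+ driven by Enhancer 3:
    expression = E1 AND E2 AND NOT E3 (compare Figure 3A)."""
    return enhancer_1 and enhancer_2 and not enhancer_3
```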
Tools for Targeting Expression
Just as the components of the Split Gal4 system have improved since its inception, so have the methods required for directing their expression to generate useful intersections of enhancer expression patterns. When the Split Gal4 system was introduced, few characterized enhancers existed that could be used to make Split Gal4 lines with gene-specific expression patterns. Methods for converting existing Gal4 enhancer-trap lines with desirable expression patterns into Split Gal4 lines with equivalent expression were also cumbersome, as was the process of making and screening new Split Gal4 enhancer-trap lines. In the intervening years, numerous technical developments have facilitated progress in all of these areas and many new resources have been generated that give researchers interested in using Split Gal4 a variety of readily implemented options.
FIGURE 3 | Restricting Split Gal4 expression using the Split Gal4 repressor, KZip + . (A) As shown in the Venn diagram (left), the Killer Zipper (KZip + ) can be used to exclude Split Gal4 activity from cells within an intersection. Where the activity of the enhancer used to drive KZip + expression (Enhancer 3, purple circle) overlaps with that of the enhancers used to drive expression of the Zip + -AD (red circle) and Zip − -Gal4DBD (blue circle) constructs (see Figure 1B), Split Gal4 activity is repressed and no reporter expression is observed. This is illustrated in the bottom-right panels, where the solid lines indicate the expression of all three constructs. Reporter expression is restricted to only that part of the intersection where Zip + -AD and Zip − -Gal4DBD alone are expressed (upper right-hand panels; dotted lines). (B) Available Killer Zipper constructs include a basic KZip + that can be expressed under the control of a specific enhancer (left) and several ''universal'' constructs that express KZip + under the control of LexA drivers. Two of the latter are shown (right), one of which bears a hemagglutinin (HA) tag and the other of which co-expresses a nuclear LacZ molecule. HA and nLacZ permit the detection of KZip + expression in cells by immunostaining.
Generating Split Gal4 Lines With Gene-Specific Expression
An obvious and important use of the Split Gal4 system is to target cell groups that lie at the intersection of expression of two genes of interest. This application might be used for either ''cell discovery'' or ''cell characterization'' depending on whether one is trying to identify neurons that express both genes or to characterize the properties of neurons known to be distinguished by their expression of the two genes. In either case, gene-specific expression of the Zip − -Gal4DBD and Zip + -AD components is required. When the Split Gal4 technique was introduced such gene-specific expression could be achieved either by using one of the few characterized DNA sequences known to contain the enhancers responsible for the expression of a gene or by tediously converting an enhancer-trap Gal4 line known to exhibit gene-specific expression into a corresponding Split Gal4 line using homologous recombination (see for example Gao et al., 2008). Although the number of enhancer fragments that faithfully replicate the expression of a native gene remains small-and most genes are now known to be under the control of multiple, often spatially dispersed, enhancers-techniques for expressing transgenes in a gene-specific manner have been considerably simplified by new methods that permit one to couple the expression of a transgene to that of a native gene or to easily exchange existing Gal4 transgenes with modules containing Split Gal4 components.
Three principal tools underlie these new methods. One is the ΦC31 integrase (Groth et al., 2004), which facilitates the modular genetic exchange of constructs into genomic loci at which an attP integrase recognition site has been introduced (Venken et al., 2006;Bischof et al., 2007). The second tool is the Cas9 nuclease, which permits sequence-specific editing at arbitrary genomic loci using CRISPR-based guide RNAs (Gratz et al., 2013;Jinek et al., 2013). The latter tool permits the introduction of highly specific breakpoints in genomic DNA to facilitate transgene replacement by homologous recombination. A final enabling technology that has permitted researchers to co-opt the regulatory elements governing the expression of a gene of interest is the viral T2A peptide. Insertion of the sequence encoding T2A into a gene of interest causes two independent polypeptides to be translated, one encoded by the sequence before the T2A C-terminus and one encoded by the sequence following it. By placing transgenes encoding Gal4 or Split Gal4 components downstream of a T2A sequence, one can co-express them with a gene of interest without explicit knowledge of that gene's enhancers.
A technology that makes use of all three tools to permit gene-specific expression of Split Gal4 components is the Trojan exon method (Diao et al., 2015). Trojan exons are synthetic exons that can be introduced into so-called ''coding introns'' (i.e., introns flanked by exons that contain coding sequence of a gene). The presence of a strong splice acceptor site (SA) before the Trojan exon ensures incorporation of the exon into the mRNA transcribed from the gene into which it is inserted so that its transgene is translated. Using ΦC31, Split Gal4-encoding Trojan exons can readily be inserted into MiMIC transposons located in coding introns (Venken et al., 2011; Figure 4A, top). Approximately 1,500 Drosophila genes have such MiMIC transposons and over 600 of these have been converted into Trojan Gal4 lines by the Drosophila Gene Disruption Project (GDP) using genetic methods that do not require germline injections (Lee et al., 2018). Germline injections of Split Gal4 constructs are required to generate Split Gal4 lines from the same MiMIC insertions. Many genes do not have MiMIC insertions, but Diao et al. (2015) also created a MiMIC-like ''Trojan exon Gal4 expression module'' (TGEM) which can be inserted into the coding introns of arbitrary genes using CRISPR/Cas 9 technology. These Gal4 insertions, once in place, can then be easily exchanged for Split Gal4 components using ΦC31. A modified version of the TGEM construct called CRIMIC, which has been designed for easy excision from the genome, is currently being incorporated into several thousand additional Drosophila genes by the GDP and will eventually permit Split Gal4 lines to be generated for most genes in the Drosophila genome (Lee et al., 2018). The growing number of TGEM and CRIMIC lines, most of which are publicly accessible through the Bloomington Drosophila Stock Center (BDSC), represent a valuable resource for making Split Gal4 lines with gene-specific patterns of expression (Table 1).
One particular class of genes of interest to neuroscientists comprises those that establish the signaling capacities of neurons. The neurotransmitters and neuromodulators to which a neuron is responsive, together with those which it uses to communicate with other cells, are often among its defining features. The genes that determine these signaling properties encode the receptors for specific neurotransmitters or neuromodulators, in addition to neuropeptides and enzymes required for the biosynthesis and transport of small molecule transmitters. Collections of driver lines that use T2A to couple Gal4 expression specifically to genes important in neurotransmission and neuromodulation have recently been made by two laboratories and represent additional important resources for those interested in implementing the Split Gal4 method (Deng et al., 2019;Kondo et al., 2020, Table 1). Both collections consist of lines in which the Gal4 coding sequence is fused to the 3′ end of a native gene encoding a signaling-related molecule via the T2A coding sequence. Using vectors made by Kondo et al. (2020), Gal4 can be exchanged for Zip − -Gal4DBD and Zip + -p65AD by the sequential action of ΦC31 and the recombinase, Cre (Figure 4A, bottom).
Converting Enhancer-Trap Gal4 Drivers to Split Gal4 Hemidrivers
Early efforts to map neuronal circuits in the fly by targeted manipulations of activity relied on collections of Gal4 enhancer-trap lines made by P-element transgenesis. Because P-element integration occurs preferentially in the 5' upstream region of genes in enhancer-rich regions, the expression of Gal4 constructs placed at these sites tends to reflect, albeit imperfectly, the expression of nearby genes (Spradling et al., 1995). Comprehensively characterized Gal4 enhancer-trap collections, such as the NP collection made by Hayashi et al. (2002), which consists of some 4,000 Gal4 lines with 3,825 distinct, mapped genomic insertion sites, thus represented a resource for sampling a wide variety of cell types. Cell groups with desired anatomical or functional properties could be identified in such lines by a variety of methods, including activity manipulations performed with UAS-TNT, UAS-Shi ts1, UAS-TrpA1, and other effectors (Gohl et al., 2017;Martin and Alcorta, 2017). In large-scale screens of such lines, the effects of activity manipulations on behavior could be observed and, in some cases, the behavioral effects could be mapped to particular neurons (Kohatsu et al., 2011;Flood et al., 2013). Because the expression patterns of most enhancer-trap lines are quite broad, often encompassing many thousands of neurons, additional methods are typically required to restrict the original pattern to smaller subsets of cells. A general method for converting a Gal4 line with expression in cells of interest into a Split Gal4 line is the homology-assisted CRISPR knock-in (HACK) method developed by Lin and Potter (2016; Figure 4B, top). This method uses a Gal4-specific guide RNA (gRNA) to introduce a Cas9-mediated double-strand break into the middle of the Gal4 sequence. Donor constructs flanked by Gal4-homologous sequences can then be introduced at the breakpoint by homology-assisted repair. If these constructs are preceded by a T2A sequence, in-frame with the Gal4 sequence at the breakpoint, the new construct will be expressed and translated in addition to a truncated fragment of the original Gal4 molecule. By making transgenic flies bearing the donor construct together with the Gal4 gRNA, the replacement of Gal4 by an alternative construct can be effected in vivo by a series of genetic crosses. Flies bearing donor constructs for the Zip − -Gal4DBD and Zip + -p65AD were introduced by Xie et al. (2018) to permit HACK-mediated conversion of arbitrary Gal4 drivers of interest into Split Gal4 hemidrivers with equivalent expression patterns.
FIGURE 4 | (A) Gene-trap drivers typically couple Gal4 expression to that of a native gene using T2A peptides (see text). T2A-Gal4 constructs are inserted either intronically, as in TGEM or CRIMIC lines (top), or just before the stop codon in the coding sequence (bottom). In both cases, the inclusion of either attP sites or an attP and a loxP site flanking the inserted constructs permits conversion of the Gal4 line into a Split Gal4 hemidriver. A Zip + -AD or Zip − -Gal4DBD donor construct with complementary flanking attB sites (or an attB and loxP site) can be substituted for Gal4, as indicated. (B) Top panels: the CRISPR/Cas-based HACK method (Lin and Potter, 2016;Xie et al., 2018) can be used to convert arbitrary enhancer-trap Gal4 drivers into Split Gal4 hemidrivers using a universal donor construct. The HACK donor construct has a T2A-Zip + -AD or -Zip − -DNA-binding domain (DBD) sequence flanked by homology arms (H1 and H2) taken from the Gal4 coding sequence. Also, the donor construct has an expression module for guide RNAs targeted to sites in the Gal4 sequence separating H1 and H2. Cas9-mediated cleavage of Gal4 at these sites, followed by homology-directed repair, inserts the desired T2A-Split Gal4 construct in-frame into the now-broken Gal4 sequence. Bottom panels: Enhancer-trap Gal4 lines made using the inSITE system (Gohl et al., 2011) can be converted into Split Gal4 hemidrivers by a series of genetic crosses. The system uses a set of three recombinases (Flp is omitted from the figure for simplicity) to substitute the desired Split Gal4 construct for Gal4 at the site of insertion. (C) The sparsely expressing Gal4 lines made by the Rubin and Dickson labs (Jenett et al., 2012;Kvon et al., 2014) use enhancer fragments with defined sequences (CRMs) to drive Gal4 expression in specific patterns. The CRM-Gal4 constructs are also inserted into defined attP landing sites. A Split Gal4 hemidriver corresponding to a given CRM-Gal4 driver can thus be made by inserting into the identical genomic site (e.g., attP) a construct that uses the same CRM (here "CRM1") to drive a Zip + -p65AD or Zip − -Gal4DBD construct instead of Gal4. As Dionne et al. (2018) caution, insertion of the Split Gal4 construct into other genomic sites may lead to deviations from the original expression pattern. The legend indicates symbols used for various DNA motifs.
An alternative to converting existing enhancer-trap Gal4 lines into Split Gal4 lines was developed by Gohl et al. (2011) who generated instead a new and large collection of enhancer-trap lines made with a novel Gal4 expression cassette ( Table 1). This cassette could be exchanged using ΦC31 and two additional recombinases for any of a variety of alternative cassettes encoding other transcriptional regulators, including Zip − -Gal4DBD, Zip + -Gal4AD, and Zip + -VP16AD (Figure 4B, bottom). Gal4 enhancer-trap lines made using this ''integrase swappable in vivo targeting element'' (inSITE) can be screened similarly to other Gal4 enhancer-trap collections to identify lines of interest, which can then be converted into Split Gal4 hemidrivers with equivalent expression patterns in a straightforward manner.
Libraries of Lines Made With Molecularly Defined Cis-Regulatory Modules (CRMs)
Although certain genes are expressed in the nervous system in relatively restricted patterns, most are expressed in many-often many hundreds or thousands of-cells. Co-opting the genetic regulatory elements governing their expression using gene-specific or enhancer-trap methods thus typically results in patterns that are quite broad. To produce Gal4 lines with sparser expression patterns that are more suitable for mapping neural circuits, the laboratory of Gerald Rubin pioneered an alternative strategy (Pfeiffer et al., 2008). Selecting 925 genes expressed in the adult fly brain, they generated 5,200 DNA fragments, each spanning about three kilobases of sequence upstream or downstream of these genes or covering larger introns. These fragments (called cis-regulatory modules, or CRMs) typically contain one or more enhancers, which can be used to drive Gal4 expression when combined with a synthetic promoter. Gal4 driver lines were generated by inserting such constructs into the attP2 landing site on the Drosophila 3rd chromosome using ΦC31. The large majority of these drivers showed expression in the nervous system and on average exhibited expression in the central brain in fewer than 100 neurons. Expanding on the success of this strategy, Jenett et al. (2012) established a collection of 7,000 CRM lines (so-called ''GMR'' or ''Generation 1'' lines) in which Gal4 expression was driven by neural enhancer fragments with defined sequence from 1,200 genes. Importantly, these authors also extensively characterized the central nervous system expression of 6,650 of the lines by confocal microscopy and annotated the patterns for anatomical features using machine-assisted methods. A similar effort by the laboratories of Barry Dickson and Alexander Stark at the Institute of Molecular Pathology in Vienna generated some 8,000 Gal4 lines (''Vienna Tiles,'' or VT lines) using 7,705 CRMs (Kvon et al., 2014). Initially characterized by their embryonic expression patterns, a subset of 2,800 lines with restricted expression in the male brain was subsequently imaged using the same methodology described by Jenett et al. (2012). The curated, searchable images of the CNS expression patterns of the GMR and VT lines have been made publicly available via the FlyLight Project at the Janelia Farm Research Campus (Table 1).
Because the expression pattern of any GMR or VT line is dictated by the CRM used to express Gal4, the CRM can be repurposed to drive the expression of Split Gal4 components in the same pattern, as long as the Zip − -Gal4DBD or Zip + -AD construct is introduced back into the same landing site as the original Gal4 construct ( Figure 4C). In this manner, Split Gal4 hemidrivers that target the cells lying at the intersection of two overlapping Gal4 expression patterns can be generated. Identifying CRMs that are likely to give an overlapping expression of Split Gal4 hemidrivers in cell types of interest has been greatly facilitated by the development of image registration and analysis tools, such as the color depth ''MIP mask'' tool (Otsuna et al., 2018), the Neuroanatomy Toolbox (Bates et al., 2020), and the recently released NeuronBridge software (Meissner et al., 2020). Such software tools can be used to align, compare, and search for similar expression patterns from confocal Z-stacks. As illustrated by the examples described in the next section, this procedure has facilitated the selection of suitable CRMs for the production of many Split Gal4 hemidrivers, which have been used in combination to target particular cell types of anatomical or functional interest.
In addition to the many Split Gal4 ''drivers'' (i.e., specific combinations of hemidrivers that target a cell group of interest) that have been generated in the pursuit of particular biological questions, both the Rubin and Dickson laboratories have produced large libraries of Zip − -Gal4DBD and Zip + -p65AD lines to serve as building blocks for generating further Split Gal4 pairs of interest (Dionne et al., 2018;Tirian and Dickson, 2017, Table 1). Together, the two groups have made approximately 4,000 Zip − -Gal4DBD lines and 3,000 Zip + -p65AD lines, which have been deposited at the Bloomington Drosophila Stock Center for public distribution. To facilitate genetic pairing of the Split Gal4 components, all Zip − -Gal4DBD stocks have transgene insertions on the 3rd chromosome at the attP2 ΦC31 landing site, while all Zip + -p65AD stocks have insertions on the 2nd chromosome at attP40. Dionne et al. (2018) describe a pipeline for rationally generating Split Gal4 drivers that target cell types of interest from the lines in these collections. Also, these authors provide useful guidelines and notes of caution. Based on the collective experience of several groups working with the FlyLight lines at the Janelia Research Campus and approximately 20,000 crosses, they note that highly specific intersections that include only the target cells of interest are a rarity, occurring in no more than 5% of cases. However, it is not uncommon to generate multiple sparse intersections, the only common element of which is the neurons of interest. In this manner, they state that it should ultimately be possible to generate relatively specific lines for three-quarters of the neurons in the adult fly brain using these methods.
APPLICATIONS OF THE SPLIT GAL4 SYSTEM IN NEURAL CIRCUIT MAPPING
In the nervous system, as in all of biology, form and function are tightly coupled. The shapes of different neuronal cell types-where their processes go and what kinds and numbers of contacts they make with other cells-are closely related to the type of information they process and pass on. In facilitating the study of individual cell types, the Split Gal4 method has made critical contributions to studies of both the architecture and operation of the fly nervous system. Indeed, the principal contribution of the Split Gal4 system has been to provide a bridge between the classical disciplines of neuroanatomy and neurophysiology. By enabling the reproducible targeting of the same cell type in different animals, Split Gal4 allows researchers to move seamlessly between analysis of a cell's connectivity and activity. For some problems, connectivity may provide the most natural entry point-if, for example, one wants to understand what type of information is processed in a particular brain region. In this case, it is important to know which neurons supply input to and carry output from that region, as well as the connectivity of local interneurons. For other problems, a neuron's anatomy and connectivity may not be of interest initially-as when one wants to understand which neurons govern a particular behavior. In this case, first identifying the functionally relevant neurons is paramount, and piecing together their interactions with each other may be secondary.
The following sections illustrate applications of the Split Gal4 system to problems of both of these types. On the physiological side, the Split Gal4 system has allowed neurons to be targeted so that their activity can be characterized or manipulated in different contexts. Information derived from such experiments is indispensable to understanding whether and how particular neurons contribute to circuit-level function and behavior. On the anatomical side, the Split Gal4 system has facilitated the mapping of dendritic and axonal projections of individual neurons. When done comprehensively for the neuronal types in a particular brain region, this has helped reveal the design principles governing operations of the fly nervous system from motion detection to memory.
From Anatomy to Function: Split Gal4 in Drosophila Systems Neuroscience
Nervous systems are compartmentalized into areas of specialized function that are characterized by the inputs they receive from, and the outputs they send to, other parts of the brain or body. The neurons that receive and send these distinct signals necessarily have morphologies evolved to serve this purpose, and defining neuronal cell types according to their morphology and position in the nervous system has been an essential feature of neuroscience research from the time of Cajal. Anatomical methods have enjoyed a renaissance in Drosophila since the introduction of the Gal4-UAS system. Recombinase-based methods for stochastically labeling single cells, such as MARCM (Lee and Luo, 1999) and Flp-Out Gal80 (Gordon and Scott, 2009), made it possible to parse Gal4 expression patterns and anatomically catalog the cell types of particular parts of the brain (see for example Jefferis et al., 2007). More recently, the introduction of CRM Gal4 lines and methods such as Flybow (Hadjieconomou et al., 2011), Drosophila Brainbow (Hampel et al., 2011), and Multi-color Flp-Out (MCFO; Nern et al., 2015), which permit individual neurons in a pattern to be differentially labeled by distinct fluorescent markers, has further enabled anatomical characterization of specific brain structures (see Wolff et al., 2015). To provide a framework for organizing the emerging knowledge from such studies, a standardized nomenclature for fly neuroanatomy was created by Ito et al. (2014).
While anatomical methods may provide essential clues about the functions of individual neurons, they must be supplemented by experimental manipulations to establish what roles a given neuron plays. By permitting first the anatomical, and then the functional, characterization of specific cell types, Split Gal4 targeting methods are allowing just such questions to be answered for diverse parts of the fly brain. Spearheaded largely by the efforts of the Rubin lab at the Janelia Research Campus and their collaborators, several collections of anatomically selective, stable ''Split Gal4 drivers'' have been created (Figure 5A). These drivers-each of which consists of a particular pair of Zip − -Gal4DBD and Zip + -AD hemidrivers combined in a single fly-can be used to systematically target cell types innervating parts of the optic lobe (Tuthill et al., 2013;Wu et al., 2016;Davis et al., 2020), the central complex (Wolff and Rubin, 2018), the lateral horn (Frechter et al., 2019), and the mushroom body (Aso et al., 2014a,b). Also, a large collection of Split Gal4 drivers has been generated that targets neurons with somata in the brain and descending projections to motor processing regions of the ventral nerve cord. Together, these collections have provided key insights into how the fly nervous system processes visual information, forms and expresses associative memories, exercises and maintains sensorimotor control, and processes innate behavioral responses to odors. Although a detailed description of the landmark articles that introduced each of these collections is well beyond the scope of this review, the nature and importance of each collection will be briefly discussed, with a particular focus on the collections that cover the lateral horn and mushroom body.
Visual System Split Gal4 Drivers: the Optic Lobe
The first collection of cell-type-specific Split Gal4 drivers to be made targeted each of the 12 non-photoreceptor cell types innervating the lamina, the first of four visual neuropils in the optic lobe (Tuthill et al., 2013). This collection differs somewhat from the others described here, in that the anatomy of all but one of the 12 cell types targeted had been well-described by classic Golgi studies (Fischbach and Dittrich, 1989) and electron microscope reconstructions (Meinertzhagen and O'Neil, 1991;Rivera-Alba et al., 2011). However, the individual functions of these cell types in motion detection-a key aspect of visual processing in which the lamina was thought to have a critical role-were largely unknown, and activity suppression experiments by Tuthill et al. (2013) using the cell-type-specific drivers established that four of the 12 lamina neuron types contributed to this process. These lines have subsequently been used in over a dozen studies that have refined these original results (Borst et al., 2020).
A major output region of the optic lobe that has been analyzed using Split Gal4 methods is the lobula. Projection neurons (VPNs) from this area convey processed visual information to other parts of the brain and Wu et al. (2016) created a set of VPN-specific Split Gal4 drivers that individually target each of 22 distinct lobula columnar cell types. Over half of these were unknown from previous work. Using the VPN-specific drivers, the authors characterized the response properties of all 22 cell types to visual stimuli, mapped their projections to areas within the central brain, and analyzed the behavioral consequences of their activation. More recently, these neurons-together with the 12 lamina cell types and 42 additional optic lobe neuronal cell types selectively targeted by newly developed Split Gal4 drivers-have been subjected to comprehensive transcriptomics analysis to determine their mechanisms of intercellular signaling (Davis et al., 2020). Analyzing certain cell types known to be synaptically connected from EM reconstructions of the optic lobe, the latter study revealed the previously unrecognized use of acetylcholine, rather than histamine, as a neurotransmitter at certain photoreceptor synapses. In this way, Split Gal4 methods have served not only to define cell types of anatomical interest, but to bridge neuroanatomical detail to neurophysiology, connectomics, and genomics.
Split Gal4 Drivers for the Central Complex and Descending Neurons
One important recipient of visual information in the fly brain is a structure consisting of several neuropils collectively called the central complex (CC). The CC contributes to a wide range of behaviors related to the animal's orientation in space and has recently been shown to form an explicit representation of a fly's directional ''heading'' in a structure called the Ellipsoid Body, which receives inputs from neurons in the Protocerebral Bridge (PB; Seelig and Jayaraman, 2015). A comprehensive collection of Split Gal4 drivers targeting neurons of the PB, together with two other CC neuropils, has recently been characterized by Wolff and Rubin (2018) and is complemented by an additional small set of functionally characterized CC Split Gal4 drivers described by Franconville et al. (2018).
FIGURE 5 | Targeting cell types of anatomical interest. (A) Colored regions indicate structures within the Drosophila CNS for which comprehensive libraries of cell type-specific Split Gal4 drivers have been made. Targeted cell types include those of the medulla (blue) and lobula (red) in the optic lobes. In the central brain, cell types of the mushroom body (MB; light green), lateral horn (LH; light brown), and parts of the central complex (magenta) have been targeted. Also, over 100 Split Gal4 drivers have been made that target diverse neuron types that send descending projections from the brain into the ventral nerve cord (dark green). See text and Table 1 for references and details. (B) Examples of Split Gal4 drivers that target MB neurons. One driver, PPL1-α3(2), labels two dopaminergic input neurons (magenta) with axonal projections to the α-lobe. The MBON-α3(2) driver labels two output neurons (green) with dendritic fields in the α-lobe. The overlapping expression appears light blue and MB lobes are shaded lightly in white. Inset: the five lobes of the MB formed by Kenyon Cell axons. α, α' lobes, vertical blue and yellow, respectively; β, β' lobes, horizontal blue and yellow, respectively; γ lobe, orange. The α3 compartments of the α-lobe, which are targeted by both the PPL1-α3 and MBON-α3 processes, are outlined.
The sensory information integrated by structures such as the CC must eventually be conveyed to motor processing areas and translated into behavior. Because behavior requires movements of the legs and wings, which are located on the thorax and are controlled by neurons in the thoracic ganglia of the ventral nerve cord, information must be transferred from areas in the brain to these ganglia. The neurons responsible for this transfer (so-called descending neurons, or DNs, see Figure 5A) typically have their cell bodies and dendritic arbors in the brain and axons that pass through the neck and terminate in the VNC. A comprehensive collection of 133 Split Gal4 drivers targeting 54 distinct types of DNs was made and anatomically characterized by Namiki et al. (2018). In distinction to the other cell-type-specific Split Gal4 collections described above, which typically label multiple morphologically similar cells, the DN collection consists of lines that typically label a single pair of bilateral DNs. The pioneering study describing these lines provided a detailed map of the leg and wing motor neuropils of the thoracic ganglia to which the different DNs project. Also, it described a novel integrative region between these two neuropils that receives input from a broad range of brain areas and may control both sets of appendages. An accompanying study by Cande et al. (2018) characterized the behavioral effects of individually activating the DNs in which these lines express. Their assay was biased against the observation of flying movements, but it identified many neurons that selectively produced walking, tapping, reaching, or grooming phenotypes.
Olfactory System Split Gal4 Drivers: Mushroom Body and Lateral Horn
In addition to the systems that govern visual and motor processing in the fly, the olfactory system has also been a major focus of study. The last two Split Gal4 driver collections to be discussed here were created to elucidate the cellular basis of olfactory processing in two distinct brain areas known for their very different handling of odor and pheromone information. Both regions receive input from projection neurons of the antennal lobe, the second-order processing station of the olfactory system, but one region, called the mushroom body (MB), transforms this input into context-dependent memories that permit flexible, experience-dependent responses to odors, while the other, called the lateral horn (LH) appears to encode stereotyped, innate responses to odors. Although the MB had attracted intense interest because of its role in associative learning, the connectivity of its component neurons was only partially characterized until Aso et al. (2014a) created 92 Split Gal4 drivers that comprehensively described the essential MB cell types (Figures 5B,C). Similarly, knowledge of the cellular composition of the LH was fragmentary before two recent Split Gal4-based analyses that enumerated and characterized its neuronal composition Figures 5D,E;Dolan et al., 2019).
The MB consists of three basic cell types: a large number of Kenyon cells (KCs) which receive randomly distributed input from olfactory PNs; MB output neurons (MBONs) which receive synaptic input from the KCs and broadly translate it into approach or avoidance behaviors; and dopaminergic neurons (DANs), which modulate the KC-MBON synapses. Mutant studies initiated in the laboratory of Martin Heisenberg in the late 1970s had suggested an elegant model of MB connectivity (Heisenberg, 2003), the basic details of which were decisively confirmed and considerably refined by Aso et al. (2014a,b) in two sweeping and insightful studies. These studies laid bare the basic logic of MB operations by precisely defining the input-output relations of individual DANs and MBONs using Split Gal4 lines (Figure 5C). The MB consists of distinctive ''lobes'' formed by the KC axons, and the two studies showed that these lobes are parcellated into 15 compartments, each of which is occupied by the dendrites of 1-4 specific MBONs and the synaptic terminals of similar numbers of specific DAN cell types. Because DAN activity, in general, reflects the rewarding or aversive impact of environmental conditions, and because DANs modulate the strength of KC-MBON synapses, rewards and punishments become associated with particular odors by activity within the MB network. The broad influence of this work can be recognized in the fact that the collection of MB Split Gal4 drivers has been used in at least 30 subsequent studies to date, and the basic model of MB network function has attracted attention from a range of researchers including those working at the interface of neuroscience and artificial intelligence (Srinivasan et al., 2018).
Whereas the Split Gal4 investigation of MB circuitry was anticipated by work carried out with Gal4 enhancer-trap lines (Tanaka et al., 2008), the second major processing area for olfactory information, the LH, had proved relatively resistant to such approaches. The Jefferis laboratory, therefore, took a two-pronged approach to mapping and characterizing LH neurons. On the one hand, they generated a large set of Zip − -Gal4DBD and Zip + -VP16AD enhancer-trap lines, from which they selected 234 hemidriver pairs with distinctive expression in LH neurons that could be used for physiological characterization. On the other hand, they generated 210 stable Split Gal4 drivers based on anatomical screening of the GMR and VT library lines that collectively expressed in 82 distinct LH cell types. Fifty-three of these cell types were specifically labeled by individual Split Gal4 drivers and included local (LHLN), input (LHIN), and output (LHON) neurons (Figure 5D). Using these two approaches, the authors demonstrated that the LH is considerably more diverse in cell-type composition than the MB. In contrast to MB cell types, individual LH cell types generally displayed stereotyped response profiles to odorants consistent with the genetic-as opposed to experience-dependent-encoding of olfactory information by the LH. Interestingly, LH output neurons, which have long been thought to play an important role in innate behavioral responses to odors and pheromones, were found not to project directly to motor processing areas. However, a significant fraction (∼30%) had processes that overlapped significantly with those of DANs or MBONs, suggesting that an interplay between innate and learned responses to odors might be critical in interpreting olfactory information (Figure 5E).
Functional Screens: Split Gal4 Mapping of Neural Circuits Governing Behavior
The use of Split Gal4 methods to study the neural circuits that govern behavior has its roots in the systematic screens initiated by Seymour Benzer's lab in the 1960s to identify genetic mutants with behavioral deficits. These screens inspired the subsequent cell-based screens conducted using enhancer-trap Gal4 lines mentioned above. Just as genetic screens required the subsequent identification of the actual mutation that caused a behavioral deficit, so enhancer-trap methods required isolation of the actual neurons within a Gal4 pattern that caused a behavior change when blocked. The Split Gal4 method was introduced precisely to permit such refinement of a Gal4 expression pattern, and its considerable utility in this regard has been demonstrated in numerous behavioral screens conducted with the collections of GMR and/or VT Gal4 lines. Among these are studies that have successfully identified and/or characterized neural substrates of: grooming (Hampel et al., 2015), walking (Bidaye et al., 2014;Robie et al., 2017;Sen et al., 2017, 2019), gap-crossing (Triphan et al., 2016), male aggression (Hoopfer et al., 2015;Watanabe et al., 2017;Duistermars et al., 2018;Jung et al., 2020), female mating receptivity (Feng et al., 2014), egg-laying (Shao et al., 2019;Wang et al., 2020), circadian rhythms (Guo et al., 2017;Liang et al., 2019;Sekiguchi et al., 2019) and sleep. Increasingly, the Split Gal4 method is being integrated into powerful circuit-mapping pipelines that employ high-throughput screening methods in which behavioral analysis is facilitated by machine learning and other computational approaches (Dankert et al., 2009;Anderson and Perona, 2014;Robie et al., 2017;Cande et al., 2018).
Split Gal4 Dissection of the Circuit Governing Backward Walking and Crawling
An instructive example of how Split Gal4 is facilitating circuit-mapping studies comes from the study of backward walking in the fly. Flies, like other animals, can respond to obstacles and potential threats by reversing their direction of locomotion. This reversal, however, does not simply invert the sequence of leg movements of the tripod gait normally used for forward walking, but instead invokes less coordinated waves of backward leg movements, first on one side and then the other. How the nervous system generates this novel pattern was unknown until the laboratory of Barry Dickson began investigating it in a series of elegant studies beginning in 2014 (Figures 6A-C). In a behavioral screen of 3,460 VT Gal4 lines, Bidaye et al. (2014) identified four lines that when activated caused flies to walk backward, a phenotype they dubbed ''moonwalker.'' One line in particular (VT50660) exhibited consistent backward walking when the neurons in which Gal4 was expressed were activated. Conversely, when the activity of these neurons was suppressed, flies failed to reverse direction when confronting a dead end in a linear track.
The VT50660 expression pattern includes seven distinct cell types, two of which were implicated in backward walking by stochastic methods of neuronal activation. Generation of Split Gal4 hemidrivers from VT50660 and several other VT Gal4 lines with expression in these neurons allowed the authors to selectively manipulate each cell type separately. They found that a single pair of neurons with cell bodies in the brain and projections to the ventral nerve cord (''moonwalker descending neurons,'' MDN; Figure 6B) was responsible for the moonwalker phenotype, but that the second pair with cell bodies in the VNC and projections to the subesophageal zone (moonwalker ascending neurons, MAN) facilitates backward walking, apparently by inhibiting the program for forward walking. A subsequent high-throughput neuronal silencing screen by the Dickson lab assayed several thousand VT Gal4 and Split Gal4 driver lines for animals impaired in backward walking when confronting a dead end (Sen et al., 2019). Reversal of walking under this condition is thought to depend, in part, on mechanosensitive neurons activated by contact with the barrier and indeed the screen produced one Split Gal4 driver, which the authors named ''Two Lumps Walking.'' This driver included ascending neurons with arbors in the mesothoracic ganglia and projections that overlapped with those of the MDNs. Anatomical screening of the VT Gal4 collection identified a line with expression in these particular neurons, but not in other neurons present in the original Split Gal4 line. By combining hemidrivers generated from this line with hemidrivers from the original line, the authors were able to selectively label and manipulate the activity of the ascending neurons (Two-lumps Ascending, TLA) and show that they mediated mechanosensitive input to the MDNs to govern reversal of walking (Figures 6B,C).
In an example of how Split Gal4-based circuit mapping approaches can productively synergize, Wu et al. (2016) in their analysis of lobula columnar neurons identified a subset (LC16) that also triggered a moonwalker-like phenotype when activated. Although the LC16 and MDN neurons do not have synaptic contacts, Sen et al. (2017) showed that activation of LC16 neurons is sufficient to activate the MDNs and that silencing of the latter neurons blocks the moonwalking phenotype elicited by stimulation of the LC16 neurons. Because the LC16 neurons are thought to mediate visual responses to looming, these studies collectively indicate that the MDN neurons act as central coordinators of evasive locomotor responses to both visual and mechanosensory input ( Figure 6C). Although the manner in which the MDNs act on motor circuits to induce backward walking in adults remains to be characterized, significant progress on this issue has been made in the larva, where the same neurons are present and have been shown to induce a backward crawling (''mooncrawler'') phenotype (Carreira-Rosario et al., 2018).
As in the adult, the observation that the polarity of larval crawling could be reversed by activation of specific neurons came from a screen of CRM Gal4 lines. Split Gal4 refinement produced three different lines with the mooncrawler phenotype that overlapped in expression only in a small complement of bilateral descending neurons. Using the KZip + to eliminate unwanted expression in the VNC and a stochastic labeling strategy to isolate other neurons within the pattern, Carreira-Rosario et al. (2018) were able to show that two pairs of the descending neurons were responsible for the mooncrawler phenotype (Figure 6D). Using the morphological features of the larval MDNs as guides, they were able to identify them in electron micrograph reconstructions of the larval connectome and map their connectivity. Paired activity manipulation/monitoring experiments of the MDNs and distinct subsets of downstream neurons allowed the authors to demonstrate that the MDNs exert two fundamental actions on the locomotor circuit: they directly activate an excitatory premotor neuron important for backward crawling (A18b; Figure 6E) and simultaneously inhibit the forward crawling circuitry via disynaptic inhibition of a second excitatory premotor neuron (Figures 6D,E). Although details of MDN connectivity must necessarily differ in the adult-where the motor circuitry is housed in the thoracic, rather than abdominal, ganglia and governs movement of the legs rather than the body wall-the fact that both the larval and adult circuits share an essential ''command-like'' element (i.e., MDN) suggests that common principles apply to the governance of backward locomotion at both developmental stages. The identification of this element, in addition to major sensory inputs and motor outputs, within the space of 5 years is also testimony to the power that Split Gal4 methods lend to modern strategies for circuit-mapping in the fly.
Split Gal4 Synergies With Connectomics: Larval Neural Circuits
Recent progress in single-cell transcriptomics and electron microscopy (EM) is defining cells of the nervous system with unprecedented granularity. As these methods permit the discrimination of ever more refined categories of neurons based on their patterns of gene expression or their connectivity, the Split Gal4 method has assumed increasing importance as a way to examine the function of new types of neurons. The investigation of functionally interesting neurons discovered using Split Gal4 has conversely benefited from the consummate neuroanatomical detail afforded by recent EM reconstructions. Researchers have been able to leverage connectomics data to identify not only the immediate synaptic partners of the neurons they have identified but also other parts of the circuits in which they participate. The value of combining Split Gal4 and EM data was already evident from early studies. Targeted manipulations of neurons downstream of UV-sensitive photoreceptors together with serial-section EM reconstruction of the fly medulla established the Dm8 amacrine neurons as the substrates governing flies' attraction to UV light (Gao et al., 2008;Meinertzhagen, 2018). However, the explicit interplay of Split Gal4 targeting and connectomics data has more recently been fostered by the ambitious goals of Janelia's FlyLight and FlyEM projects. These projects aim to produce Split Gal4 reagents for investigating most of the cell types in the fly brain, while also providing a complete map at a synaptic resolution of the entire fly nervous system. The benefits of this combined approach can be seen in work that incorporates data from EM reconstructions of the optic lobe (Shinomiya et al., 2019) and mushroom body alpha lobe (Takemura et al., 2017), as well as the just-completed adult ''hemibrain'' (Zheng et al., 2018;Wang et al., 2020). Nowhere is the value of bootstrapping EM and Split Gal4 data more evident, however, than in studies of the larval nervous system.
The small size and numerical simplicity of the larval nervous system made it an attractive candidate for EM reconstruction, a task which was spearheaded by Albert Cardona's group and has been carried out in collaboration with a variety of researchers interested in different aspects of larval behavior. As in the adult, a major focus of investigation in the central brain has been the MB. Using 12 specific Split Gal4 drivers, Saumweber et al. (2018) functionally characterized a subset of DANs and MBONs in the 3rd larval instar MB, incorporating anatomical insights drawn from a nearly complete EM reconstruction of this structure in the first larval instar (Eichler et al., 2017). Another class of larval circuits whose investigation has benefited from combined EM and Split Gal4 analysis are the dense sensorimotor networks that regulate forward and backward locomotion (Heckscher et al., 2015; Carreira-Rosario et al., 2018; Kohsaka et al., 2019) as well as responses to mechanosensory stimuli (Ohyama et al., 2015; Jovanic et al., 2016, 2019). Interestingly, a screen of approximately 300 Split Gal4 drivers to identify neurons required for tracking odor plumes (Tastekin et al., 2018) labeled a pair of descending neurons (PDM-DN; Figures 7A,B) with anatomy and connectivity similar to that of the mooncrawler neurons described above (Carreira-Rosario et al., 2018). Connectomics analysis revealed that the PDM-DN neurons synapse downstream on an inhibitory neuron in the SEZ also targeted by the MDNs (''Pair 1'' in Figure 6D, SEZ-DN1 in Figures 7B,C), which blocks forward locomotion. It does so by inhibiting the activity of a specific subset of posterior premotor neurons known as A27h (Figures 6E, 7B,C), which had been previously implicated in forward peristalsis and shown to connect to known motor neurons (Fushiki et al., 2016). On the upstream side, the PDM-DNs were shown to receive prominent input from two LH neurons (LH-LN1/2; Figures 7B,C). These neurons are downstream of identified olfactory projection neurons that mediate responses to odors detected by known olfactory receptor neurons. Remarkably, this means that the basic connectivity of at least one larval sensorimotor circuit has been characterized, from the neurons that mediate an initiating sensation to the motor neurons that mediate part of the behavioral response.

FIGURE 7 | (A) A Split Gal4 driver formed by intersection (∩) with the V002081 Gal4DBD hemidriver was identified in a synaptic suppression screen to identify neurons involved in larval chemotaxis. This driver labels a single pair of descending neurons (PDM-DN). (B) Representation of the EM-reconstructed neurons in the circuit for larval chemotaxis. Connectomics analysis revealed that the PDM-DN receives input from two lateral horn neurons, LH-LN1 and LH-LN2 (light and dark purple, respectively), and innervates three neurons in the SEZ, one of which is shown (SEZ-DN1, blue). The LH neurons are downstream of unpaired olfactory projection neurons (PN, orange) that receive input from Or42a and Or42b olfactory receptor neurons (yellow). The SEZ-DN1 neuron is the same SEZ neuron identified downstream of the larval "mooncrawler" neurons (i.e., Pair1) and connects to the posterior A27h premotor neurons (teal). (C) Although certain details remain to be determined, such as the identity of the PNs that innervate the LH neurons and the functional interactions of the LH and PDM-DN neurons, the identified components of the larval chemotaxis circuit span the entire neuraxis from the sensory periphery to the final common pathway of the motor neurons [Adapted from Tastekin et al. (2018)].
OTHER AREAS OF APPLICATION AND SPIN-OFFS
While the Split Gal4 method has been primarily embraced by researchers interested in elucidating neural circuits in the fly brain, its range of potential applications is much broader. Within neuroscience, the areas of neurodevelopment and neuromodulation have both benefited from the application of Split Gal4 to certain problems and the use of the method will likely expand in these areas. Outside of neuroscience lies a largely unexplored domain of application, namely other tissues. With its extreme cellular diversity, the nervous system is the tissue most obviously in need of combinatorial methods for isolating functionally and anatomically distinct cell types, but many other tissues are composed of different cell types that can be only incompletely isolated using binary targeting systems. Other binary systems-and other organisms-have also begun to benefit from adaptations of the Split Gal4 technology. In this section, we briefly review these emergent domains of Split Gal4 implementation.
Neurodevelopment
The successive restrictions of cell fate that give rise to neuronal cell types start before neurogenesis and proceed through a series of key developmental events including neurite elaboration and pathfinding, synaptic partner recognition, and sometimes neurite pruning and cell death. To distinguish the cell-autonomous and non-cell-autonomous mechanisms that guide each of these processes, it is often necessary to genetically mark and/or manipulate single cells. Not coincidentally, the first techniques for genetically labeling single cells, such as MARCM (Lee and Luo, 1999), were developed for use in neuroscience. A wide range of powerful genetic techniques for studying neurodevelopment has followed, particularly for use in neuronal lineage mapping (Yu et al., 2009; Awasaki et al., 2014; Ren et al., 2016; Garcia-Marques et al., 2019). Although the availability of these techniques has somewhat mitigated the need for Split Gal4, the latter method has also found productive application in the study of numerous developmental processes. These include neuronal differentiation (Seroka and Doe, 2019), target matching and synaptogenesis (Couton et al., 2015; Courgeon and Desplan, 2019; Menon et al., 2019; Xu et al., 2019), and neuron-glia interactions (Coutinho-Budd et al., 2017; McLaughlin et al., 2019; Shimozono et al., 2019). Also, the characterization of postembryonic neuroblast lineages has profited from the application of Split Gal4 methods (Lacin and Truman, 2016; Lacin et al., 2020; Figures 8A-C). Split Gal4 lines generated using CRMs known to express in embryonic neuroblasts (Manning et al., 2012) have been used to permanently label early-born neuronal progeny. While these lines tend to show transient expression of the Split Gal4 components in neuroblast progeny, lines generated using the Trojan exon method and targeting transcription factor genes important for specifying neuronal identity have the advantage of exhibiting persistent patterns of expression of the Zip−-Gal4DBD and Zip+-p65AD components (Lacin et al., 2019).
Neuromodulation
Although synaptic signaling between neurons is of paramount importance, neurons also communicate through other channels. One of the most important of these uses not fast neurotransmitters, which directly regulate ionic conductances, but instead molecules that act on slower timescales-often via G-protein coupled receptors-and over larger distances. These molecules, which include an assortment of factors from biogenic amines to neuropeptides, act to modulate synaptic signaling and are called neuromodulators. Specific neuromodulators play important roles in specifying behavioral and physiological states. Identifying the sources of these factors and their sites of action is therefore important to understanding nervous system function. Mapping such patterns of neuromodulatory connectivity requires selectively targeting neurons that express specific neuromodulators or their receptors. Although Split Gal4 methods offer considerable promise in this endeavor, they have been used only in a small number of cases thus far.
One area where progress is most evident is in the study of molting. This developmental process is particularly reliant on the use of neuromodulators to control behavioral and physiological events (White and Ewer, 2014). Three hormones involved in this process, all of which act within the CNS as neuromodulators, are Ecdysis Triggering Hormone (ETH), Bursicon, and Crustacean Cardioactive Peptide (CCAP). Two of the first applications of Split Gal4 technology-both described above-were used to identify subsets of neurons that released CCAP (Luan et al., 2006) and Bursicon (Luan et al., 2012). More recently, neurons targeted by ETH, CCAP, and Bursicon have been categorized using Split Gal4 into subsets according to their use of different fast neurotransmitters (Diao et al., 2016, 2017). The use of the Trojan exon method has considerably facilitated these efforts by permitting neurons that express the relevant hormone receptors to be selectively targeted by expression of Split Gal4 constructs. Further progress in mapping what might be called the ''neuromodulatory connectome'' should be facilitated by the libraries of lines described above that systematically target neurons expressing genes important for neuromodulatory signaling (Deng et al., 2019; Kondo et al., 2020, Table 1).
Targeting Cell Types in Non-neural Tissues
Much of the excitement surrounding the introduction of the Gal4-UAS method centered around its promise for studying the development of a wide variety of tissues. The 220 enhancer-trap lines generated by Brand and Perrimon (1993) expressed Gal4 in a wide range of embryonic cell types. Although the specificity of expression of these and subsequent Gal4 lines is sufficient to characterize different kinds of cells in many tissues, expression in a single cell type in a single tissue is often not possible because of the pleiotropic expression of most genes. However, combinatorial methods such as Split Gal4 largely remain to be exploited to achieve greater selectivity of expression. Emerging transcriptomics data for a wide range of tissues should make it possible to use the Split Gal4 toolbox to rationally generate lines that target particular cell types based on their expression of distinct genes. An alternative approach is to leverage the large numbers of GMR and VT lines to make Split Gal4 stocks for this purpose. Although many of the CRMs used to create these lines were selected based on their proximity to neuronally expressed genes, many such genes also express outside of the nervous system. The VT lines clearly express in diverse tissue types developmentally (Kvon et al., 2014), and a survey of the GMR lines shows that approximately one-fifth exhibit expression in imaginal discs, which give rise to adult appendages, sensory organs, and reproductive tissues (Jory et al., 2012). Thus, it is likely that these collections, and the Split Gal4 collections currently being generated from them, represent a valuable resource for targeting non-neural tissues.
Split Gal4 Spin-offs
Beyond specific applications, the Split Gal4 technology has also influenced the development of similar technologies for use in the fly and other genetic model organisms. Two similar split transcription factor systems-both using the same leucine zipper pair used in the Split Gal4 system-have been developed in Drosophila. These systems can be used to achieve refined expression of reporters or effectors under the control of either split LexA or split QF (Riabinina et al., 2019) transcriptional activators. Both can be used in conjunction with the Gal4-UAS system to simultaneously express different reporters/effectors in two distinct cell groups. Ting et al. (2011) also introduced a clever method for converting a Gal4 driver into a Split LexA hemidriver by making flies in which the Zip−-LexADBD transgene is placed downstream of the UAS (Figures 8D-F). In addition to these fly-based spin-offs, a ternary expression system based on the zebrafish-optimized version of Gal4, called ''Split KalTA4,'' has been shown to work in D. rerio (Almeida and Lyons, 2015) and a split QF system that uses an alternative pair of zippers has been demonstrated in C. elegans (Wei et al., 2012). In general, these systems have yet to gain the same traction as the Split Gal4 system.
CONCLUSION
As the examples above make clear, Crick's dream of being able to manipulate the activity of specific cell types in the brain has been realized in the fly. Enabled by Split Gal4 methods, such manipulations are defining the functions of a growing number of neurons. Coupled with the knowledge of how these neurons interact, which is rapidly becoming available from EM reconstructions of the larval and adult central nervous systems, Split Gal4 is yielding an increasingly comprehensive picture of the fly brain and how it operates.
In some ways, the success of the Split Gal4 method is remarkable. It implies that many cell types in the fly can be uniquely specified by the activity of only two enhancer domains. A critical question is whether this will prove true of the many neuronal cell types that remain to be characterized. It is worth noting in this regard that existing collections of Split Gal4 drivers, such as those for descending or lateral horn neurons, include only about one-third of the estimated cell types in their respective categories (Dolan et al., 2019). Also, some current cell types defined by Split Gal4 line expression, such as the lobula columnar neurons and subclasses of MB Kenyon cells, include hundreds of morphologically similar neurons, which may yet yield to further subdivision based on more subtle genetic and functional differences. The question of whether Split Gal4 technology will allow all neuronal cell types to be individually targeted is thus likely to hinge not only on technical issues but also on how stringent a definition of cell type one adopts. Nevertheless, there is reason for optimism. First, the Janelia Research Campus, which has both underwritten and driven much of the recent technical progress in fly neuroscience, is continuing to generate further lines and has projected that current methods should allow Split Gal4 combinations to be made that cover 75% of all cell types in the adult brain. The coverage of neurons in the numerically simpler larval brain is likely to be better. Resources created to exploit the many thousands of enhancers represented in the GMR and VT collections will help distribute this effort (see for example Meissner et al., 2020), and methods for rationally identifying novel gene enhancers-or for making gene-specific Split Gal4 hemidrivers-may help realize a relatively complete catalog of Split Gal4 drivers. Where gaps persist and further specificity is required, further restriction using the Killer Zipper or other combinatorial strategies may also help (for examples see Pankova and Borst, 2017; Tison et al., 2019).
A more prosaic question is whether the burden of maintaining many thousands of Split Gal4 lines will represent an impediment to future progress. For stock centers reliant largely on user fees, it is expensive to maintain lines that are infrequently requested, as will generally be the case for cell-type-specific lines. A felicitous feature of the hemidriver lines generated using CRMs from the GMR or VT collections is that they can be regenerated by straightforward means and do not necessarily have to be maintained. The same is true of lines generated using Trojan exons, CRIMIC constructs, or similar methods using 2A peptides. Nevertheless, the cost and effort of remaking lines make alternative methods for sparse targeting of cells attractive, especially if they require maintenance of fewer lines. To date, no other methods have emerged that meet this requirement. The recently developed SpaRCLIn method has been proposed as an alternative to Split Gal4, but its efficacy and promise remain to be demonstrated (Luan et al., 2020).
An obvious lacuna in the Split Gal4 toolbox is the absence of a method for temporally-as well as spatially-restricting transcriptional activity. The standard method of constraining Gal4 activity to a particular time-window using the temperature-sensitive mutant of Gal80 cannot be used with current implementations of Split Gal4 as Gal80 does not bind dVP16AD or p65AD. Possibly the temporal control could be introduced into the Split Gal4 system using dimerization domains that make the association of the Gal4DBD and AD contingent upon light or a chemical inducer of dimerization (Taslimi et al., 2016; Huynh et al., 2020), but these solutions would require the creation of completely new lines. A more congenial solution would be to temporally control Split Gal4 activity by rendering expression (or activity) of the Killer Zipper contingent upon heat or drug binding, perhaps via a recombinase, but this has not yet been accomplished.
With these caveats aside, Split Gal4 methods are providing the means for remarkable advances in fly neurobiology. By providing reliable and reproducible genetic access to ever more neuronal cell types, Split Gal4 is enabling the assembly of a comprehensive parts list of the Drosophila brain, complete with information about the functions and interactions of these parts. The cornucopia of Split Gal4 lines already available and currently in production can be expected to keep fly neuroscientists busy for some time to come, and as the catalog of lines increases, we can only anticipate a deeper understanding of not only how the fly brain works, but how nervous systems in general help animals navigate the opportunities and risks of the world to promote survival and reproduction.
ENACTING THE GODS: THE PERFORMANCE OF HAOBA NURABI EPISODE IN THE LAI HARAOBA OF MANIPUR
This paper examines the performance of the Haoba Nurabi episode in the Meitei Lai Haraoba of Manipur. It attempts to delve into the intricate rituals of Lai Haraoba, a celebration that combines elements such as dance, music, sports, and sacred ceremonies to honour the presiding deities. The paper also provides insights into the performance space known as the laipung, shedding light on the staging and realisation of this traditional performance style. Throughout this paper, the terms drama, theatre
In a historic achievement, a theatre troupe from Manipur clinched their first-ever victory outside their home state at the theatre festival/competition in New Delhi, which took place from November 22 to December 25, 1954. The festival was divided into four categories: modern, traditional, folk, and historical. It was meant for 15 languages which were included in the Eighth Schedule of the Constitution of India. Legend has it that Manipuri, a non-scheduled language at that time, was permitted to stage a play with the consent of the then Prime Minister Pandit Jawaharlal Nehru. Manipur Dramatic Union (MDU) represented Manipur and staged Sarangthem Bormani's "Haorang Leishang Shaphabi." The play was adjudged first position in the 'folk category.' It is said that the play enthralled both the audience and the judges alike. The play incorporated elements from Lai Haraoba or Umang Lai Haraoba, which is regarded as a propitiating festival/ceremony of presiding deities, spanning many days, intrinsic to the culture and tradition of Manipur. The above example is cited just to elaborate on how vital Lai Haraoba is in the discourse of Manipuri performance traditions.
The pursuit of spectacle, rituals, and ceremonies by the court of Manipur can be compared to the Balinese culture, which Geertz (1980) describes as a 'theatre state' where the "kings and princes were the impresarios, the priests the directors, and the peasants the supporting cast, stage crew, and audience" (p. 2). The kingdom of Manipur loved spectacle and ceremonialism, and mass rituals ran deep in the court as well as the sacred grounds. So, when Modern/Western proscenium theatre made inroads in Manipur in the early Twentieth century via Bengali officials of the British Raj, the court and the people of Manipur accepted it wholeheartedly.
There was a sizable Bengali population in the present-day Babupara (formerly Haobam Marak) in the heart of Imphal. They celebrated Durga Puja, Saraswati Puja, Kali Puja, and other religious festivities, where they incorporated dramas in the Western format to entertain themselves. It would be noteworthy to mention that the Bengalis had their brush with Western theatre in 1795 with a short-lived play by Gerasim Lebedeff (Sen (1960), p. 175). They tried to replicate in Manipur what was prevalent in Bengal at the Puja pandals and makeshift spaces, but the first theatre with a proper stage was constructed at the residence of Bamacharan Mukherjee in 1903 with the establishment of a theatre team called Bamacharan Mukhopadhyay Bandhav Natyasala (Singh (1980), p. 31). But it was hardly more than a shed rather than a proper theatre house. A theatre hall was then constructed with the patronage of Maharaja Churachand and the British Political Agent Colonel Shakespeare in 1905 and was called "Manipuri Friend's Dramatic Union." Since then, two or three plays were staged every year during Durga Puja. The actors were a mix of Bengalis and Manipuris, but the language of the plays was Bengali. It was only on September 30, 1925 that the first play in the Manipuri language, called Nara Singh, written by Lairenmayum Ibungohal, was staged at the palace. It marked the beginning of proscenium theatre in Manipur, performed in the Manipuri language.
Other performances which were not traditionally Manipuri also started flourishing alongside proscenium theatre on the fertile soil of Manipur. Sumang Lila (courtyard plays) and Fagi Lila (comic plays) are notable ones. The direct interpretation of Western theatre can be found in Sumang Lila with traces of jatra. According to Singh (2012), sumang lila is the "lila in which a few artists perform without much props at a courtyard or an open space surrounded by an audience. The performance is characteristic of witty dialogues and appropriate body movements, thus giving the audience the flavours of many rasas" (p. 10). Shyamsunder further contends that not all the 'courtyard' performances can be called sumang lila. After the advent of Vaishnavism in Manipur, many religious plays based on the life of Shri Krishna and other episodes from the epics and Puranas were performed at the courtyard or the mandapa. These performances include Udukhol, San Senba Lila (Krishna tending the herd), Goura Lila, etc. Technically speaking, all the performances come under the 'courtyard' plays. However, to clear the definitional confusion, the Manipur State Kala Akademy, in its General Body meeting held on 15 January 1976, called the religious plays jatra, and the non-religious ones were named sumang lila or courtyard plays (pp. 10-11).
It took considerable time for the people of Manipur to rediscover and experiment with their diverse performance traditions within the realm of an independent Manipuri theatre. Among these traditions are performances like Rasa Lila, Thang-Ta (Sword and Spear), Wari Leeba (narrative storytelling performances), Khongjom Parva (musical storytelling), and various other forms, all of which have been cultivated and refined in the cultural soil of Manipur. Most notably, there is Lai Haraoba, revered as the cornerstone of all Manipuri performances.
For ages, Lai Haraoba has been delivering the essential spectacle while simultaneously fulfilling its religious and ritual obligations. The laipung (sacred space) has hosted many great singers, performers, and dancers. This very laipung has received creative expressions and stored them for posterity. It is no wonder that many performances that constitute the Lai Haraoba are characterised by meticulously defined conventions, rites and rituals, and ceremonials that underwent changes and innovations. Having said this, the Haoba Nurabi episode is distinctly different from other constituent elements, for it exhibits a higher level of refinement and standardisation in the aspects of performance, music, makeup, costume, reception, and, most importantly, popularity.
LAI HARAOBA AND ITS ASPECTS
Social dramas are embodied in ritual, where they have paradigmatic functions that make clear the deepest values of the culture (Bell (2009), p. 41). There must have been a transition from rituals to drama. Furthermore, whether the origin of performance is ritual is another debate. But many theorists share Richard Schechner's confusion, as seen in his statement, "At one moment ritual seems to be the source [of performance], at another it is entertainment." As we progress in the paper, we shall also see that the binary continuum of efficacy/ritual and entertainment/theatre is what Schechner calls 'performance' (Schechner (2005a), pp. 136-140). Lai Haraoba serves as an exemplary illustration of rituals that effectively act as theatrical performances, conveying societal truths and the perspectives held by its members about those truths. At the same time, it provides 'entertainment' to a congregation of persons who are believers and simultaneously act as spectators. Lai Haraoba is, in fact, a perfect mix of ritual and entertainment.
So, we come to the very intricate Meitei Lai Haraoba and its many facets, and how vital the ritual festival is in a Manipuri's worldview and in the history of performance of the state. Firstly, a glance at the definition provided by Singh (1963): Ngariyanbam Kulachandra Singh only provides the etymology of the term Lai Haraoba and its divine origin. We also know what happens in this festival, as we shall see subsequently. However, the complexity of this festival is unforgiving. Many have tried to define it, but there still is a sense of lack in these attempts. The absence of a conclusive definition for Lai Haraoba arises from the rich, intricate, and multilayered connotations associated with every step and stage of the propitiating festival.
Nevertheless, one can glean insights from observing this celebration, which vividly portrays the narrative of human creation right from the turning of the vital energy into limbs, eyes, ears, etc., inside the womb, birth and inhabiting the earth, building shelters and houses, agriculture and farming, weaving, and handicraft, through songs, dances, and rituals. This festival's religious and spiritual aspect is the celebration of the creation, preservation and propagation of life and a sense of community healing. The Meiteis (the majority tribe of Manipur, also spelt Meetei) believe that such reenactments will bring peace and safeguard sustenance in the land.
Lai Haraoba is described as a propitiating festival or pleasing of the gods, both widely accepted and employed descriptions today. There are four major kinds or variations of Lai Haraoba, and they are 1) Kanglei Haraoba, 2) Moirang Haraoba, 3) Chakpa Haraoba, and 4) Kakching Haraoba. The inclusion or exclusion of certain ritualistic practices and localisation of the haraoba differentiates these four different styles. However, the core objectives of the haraoba remain the same in all of them. The core objectives which provide equilibrium to the realm are 1) bigger villages and state (a death and disease-free realm), 2) abundant rice and fish (bountiful harvest and other produce), and 3) long life for the king and his family (political stability).
The propitiated gods are called Umang Lais (literally meaning gods who reside at sacred groves). They are gods from the Meitei creation myth; some are ancestral deities, and some are deities who protect and watch over villages and the villagers. This pleasing festival is carried out at the laipung (sacred ground) where the lai (deity to be propitiated) resides. The laipung is an open, circular space within an umang or sacred grove. The annual festival is carried out with the active participation of villagers as performers with an audience.
From the ritual perspective, Lai Haraoba can be broadly divided into recurring and non-recurring ceremonies. Recurring rituals are those which are performed every day. Non-recurring rituals are performed only once, and they occur either on the first day of the appeasement ceremony or on the concluding day of the ceremony. The Tangkhul-Nurabi episode is only performed once, on the concluding day of the festival.
LAIPUNG: THE SACRED (PERFORMANCE) SPACE
Meiteis believe that the Lai Haraoba is a gift of the gods. It is we human beings who learn the dance movements of the gods through imitation and replicate them on earth. Therefore, it is only natural that the performance would take place in a sacred space. This space is known as laipung, which is a conflation of two words: lai, meaning god/deity, and pung, meaning mound or ground. It is at this space that the appeasement rituals are performed. Since Manipuris got Lai Haraoba from the gods, laipung, the performance space, must be the earthly model of the divine space. However, it is essential to note that this idea is not unique to Manipuri culture alone. The belief in a divine origin for both performance and the space in which it takes place is a widespread concept that transcends cultural boundaries and can be observed in various cultures around the world. To put it into context, let us see what Awasthi (2001) writes about the divine origin of theatre in India and its link with temples: "In a tradition in which the drama has been taken as a gift of gods, it was but natural that theatre would find a place in the temple, the abode of gods. The temple, as a link between earth and heaven is the most appropriate place for the presentation of drama dealing with the God's lila (divine deeds) and his avatar roop (incarnation form)" (p. 10). The above quote explains the triangulation of God, theatre/performance, and temple/performance space. However, the Manipuri notion of the above three is more complicated than one can imagine. For a performance to take place, we need to know where the god (lai) resides. The Manipuri notion of god or lai is as complex as it can get: "When a lai resides in a Manipuri house, the lai becomes yumlai (yum = household + lai). The term lamlai (lam = open space + lai = god/deity) is used when the presence of a lai is perceived at or on a certain geographical space/area like a meadow or an open space. The clan god/deity is known as sageilai, which is again a conflation of sagei = clan + lai = god/deity. Maikei ngakpa lai (maikei = direction + ngakpa = to protect + lai) is a tutelary or guardian deity. Other lais are believed to have resided in hills, rivers, lakes, trees, etc. Above all these lais, there are other lais associated with the Meitei creation myth. To make matters worse, Manipuris also use the term lai to denote evil spirits (malevolent) and good spirits (benevolent)" (Premchandra (2022), p. 48). The sketch below describes how the performers use the performance space. Manipur being a ritual state, performance can occur wherever a lai's presence is felt and perceived. For example, whenever Meiteis go out on a picnic or eat in open spaces far from home, they make food offerings to lamlai (perceived deities who exercise overlordship over the space) before they eat. So, the nature of the space changes when someone adds ritualistic offerings. The space gets sanctified, and the perceived lai is appeased. This appeasement concept is what one finds in the performance of the Haoba-Nurabi at the laipung.
"Every sacred space implies a hierophany, an eruption of the sacred that results in detaching a territory from the surrounding cosmic milieu and making it qualitatively different Eliade (1959), p. 26.In this way, a laipung serves as an excellent centre of the Meitei faith and community, making it qualitatively different from other spaces where other lais are perceived.The love of dance, music, and sports by the Manipuris is expressed here at this laipung.Pena players (a traditional stringed musical instrument), bards, shamans, and other learned scholars from the kingdom's institutes contributed to the making of Lai Haraoba as we see it today.Their compositions, dance movements, rituals, and other creative expressions were added from time to time to make Lai Haraoba a complete performance unto itself.
The whole space where the performance takes place is referred to as nadayai sidayai pung, which roughly translates into the 'space free from death and disease.' The four corners of the oval-shaped laipung point towards hills where the four major deities reside: Thangching, Koupru, Marching, and Wangpuren. It happens so because the temple where the lais (deities) sit to preside over the day's proceedings always faces the East. The door of death (ashi thong) is imagined to be slightly ajar. In contrast, the door of the living is widely opened. It is thus presumed because the central idea of the propitiating festival is a land free from death and disease. Therefore, death is shunned, and birth is welcomed. Pakhra Khong and Lukhra Khong also bear similar interpretations from the standpoint of procreation and populating the land. Over and above all these, the propitiated deities must be accompanied by their consorts. Deities who do not have consorts are not propitiated. The reason is the associated male and female energies, which must be perceived and emulated by the devotees.
The spot occupied by the characters Konsabi and Tharainu is a symbolic depiction of a market. It is not the market that one can see in Manipur. It is instead the Leichon Keithel or Market, which the Meitei goddess of wealth, Emoinu, graced once. It is believed to be on the Northern side of the Koupru Hills. In this performance, a complete set of activities enabling a people's sustenance is depicted. It includes farming, horticulture, and the accumulation of wealth through buying and selling at the market. It also illustrates how Meitei women have been controlling the internal economy of Manipur for ages.
Haoba's comic encounter with bees and subsequent actions become pivotal in the performance. In some Lai Haraoba renditions, Haoba's bee sting is portrayed as so excruciating that it leads to comedic consecration, eliciting laughter and amusement from the audience. However, this irreverent episode within the sacred setting does not detract from the central theme of the performance, which is the celebration of procreation. The message of procreation remains paramount, reminding spectators of their roles and responsibilities as males and females. The bee's sting symbolically represents the physical encounter between the main characters, and the act of harvesting and enjoying the beehive together represents the bountiful "harvest" or growth in population.
THE HAOBA-NURABI EPISODE
Manipuri drama has come a long way. It has already marked a century of performance in the year 2002. But true drama has been part of Manipuri performance traditions in all its aspects and senses. Regarding this, one of the stalwarts of Manipuri theatre, Somorendra (2000), had this to say: "In a span of almost a century which is of course a very short period, drama took a good stride in Manipur and a lot of changes have occurred in its form and content in the last quarter of the 20th century. It was because of the fertility of Manipur's culture, and specially so in the case of drama in which has been found embedded alive in the aged old religious performance of the Laiharaoba. The Tangkhul Saram Pakhang (the youth, Tangkhul Saram) and Nurabi (Maiden) episode with exchange of dialogue, songs and its conflict of claiming the ownership of the land is really a drama in action" (p. 32). Arambam Somorendra's accounts shed light on the presence of drama within the culturally rich performances of Manipur, even before the introduction of proscenium theatre into the kingdom. Somorendra mentions the Haoba-Nurabi episode from the ancient ritual performance because it is much closer to modern drama with dialogues and costumes. A similar view is also expressed by Singh (2013), a critic and literary historian, when he says that "The Manipuris already past masters in dance, music and later jatra, took to the new form of performance [proscenium theatre] like a duck to water" (pp. 234-35).
In a straightforward interpretation, this phrase conveys, "you are the maiden who sleeps in the assigned maiden's chamber at your father's residence." This suggests that Panthoipi remained unmarried until she married into the Khaba clan. Nonetheless, she had already been promised to Nongpok Ningthou (Haoba) in a prior sayon or avatar. The reason behind her departure from her intended husband to unite with Haoba stems from a previous agreement with Marjing during their earlier sayon. From day one of the marriage, she pretended to be possessed and ultimately vanished from the Khaba household only to appear at the Nongmaiching to be united with Haoba. Wayenbam Lukhoi Singh elucidates why the divine lovers fail to recognise each other upon meeting at Nongmaiching. In their previous incarnation, the lovers had decided to be reborn as Haoba and Nurabi. Haoba's task was to seek out Nurabi, identifying her by the conjoined gourd she carried. Initially overlooking the gourd, Haoba instigated a quarrel, leading to subsequent events (Singh (2008), pp. 107-108).
The performance under discussion depicts the meeting of Nongpok Ningthou and Panthoibi, disguised as Tangkhul Huitok Pakhang and Saram Nurabi while hiding from the Khaba clan. Haoba asked Nurabi to meet him at the Saramching of the Chakha Hill range. Haoba (Tangkhul) met Nurabi while she was tilling the ground for sowing rice. Seen as a ritual performance, the act begins with an invocation, followed by meticulously performed rituals of tilling and sowing and the proper processes of harvesting. What makes this act unique is the involvement of trained characters who are extrinsic to everyday rituals. This act provides comic interludes to an otherwise profound ritual realisation of Lai Haraoba. According to Gourachandra (2015), numerous ritual songs were taught to the people of Kakching and incorporated into Kakching Haraoba during the reign of Maharaja Churachand (1891-1941). These songs, Oukri and Khencho, which have become integral to Lai Haraoba, were originally adopted from Kanglei Haraoba (p. 109). This observation suggests that Lai Haraoba allowed for both additions and omissions, raising the possibility that the comedic interlude provided by Haoba may have been an inspired addition.
THE PERFORMERS AND THE PERFORMANCE
The duration of the Haoba Nurabi act differs from performance to performance, depending on the spectators' demand and Haoba's crowd interaction. As mentioned earlier, it is the only performance in the entire Lai Haraoba celebration where hired actors become a part of the propitiating festival until the performance is over. This enactment, often referred to as loutarol or the "language of tilling the field," portrays the promised meeting of these two divine lovers. The encounter between the two is light-hearted, incorporating elements like humour, mythology, sacred chants, and playful interactions. After this act is over, other performances will follow until the day's activities are over.
Performers/Actors:
1) Tangkhul Saba (hired actor)
2) Nurabi (she-shaman who is already part of the daily proceedings)
3) Seven helpers of Nurabi (hired actors)
4) He-shaman (who is already part of the daily proceedings)
Later additions:
5) Meitei Lambu (an old man who is the owner of the land, hired actor)
6) Konsabi (an old woman who is a trader, hired actor)
7) Tharainu (Konsabi's helper, hired actor)
The following enumerated events in the enactment must be followed strictly so that the deities are not angered. We can divide the performance into three Acts with an Invocation. Events one to two form the invocation. The first Act is from three to six. The second Act is from seven to nine. The final Act is from ten to twelve.
Events as they unfold:
1) Maiba (he-shaman) sings the invocation
2) Maibi (she-shaman) sings the louta eshei or loutarol as a follow-on of the invocation
3) Nurabi (Maibi) sings louyan eshei and her female mates sing the chorus
4) Haoba's entry
5) Haoba pretends to shoot arrows in four different directions
6) Tangkhul and Nurabi come face to face but do not recognise one another; they express their feelings through songs called khutlang eshei awai akhum
7) They fight for the land where they till
8) Meitei Lambu meddles and stops the fight
9) Their true identities are revealed, and they till the land together
10) Bees bite Haoba
11) They harvest the beehive and eat it together
12) They sing louka eshei and end the performance
The Haoba-Nurabi episode in Lai Haraoba begins with an invocation sung by the maiba or the he-shaman. This invocation is sung so that the audience forgives the performers if any mistake is committed while performing. These songs also ask the deities to bless the land for pest-free crops and good harvests, and for long and peaceful lives for the citizens and the king's family. This paper does not include the song sung by the maiba, and many others, for want of space. The maibi, on the other hand, continues the invocative part with another song, dedicating it to the presiding deities. The song she sings is called Loutarol, or the song of the first day at the field. The same song can be found in the old treatise called Khamlang Ereng Puwari. She sings:
Ha! The owner of the plough of the universe
The cracking of the fallow earth begins
Tools and implements have been brought out
Let's till the land, let's say he-hou-hei-hou
He-he-yiyo, he-he-he-yiyo.
Beginning the song thus
Like all the gods who assembled
And began the tilling of the field
We, those who populate this big village
We, the shamans of this realm
Rubbing shoulders with the rhapsodists
In hordes of groups and subgroups
Singing the loutarol song sung by the gods
We present to you this song again
Forgive us our mistakes, forgive our iniquities
For you are magnanimous, for you are benevolent
We seek permission from you two
To till the land
For the big village under your Lordship
For big and high-yielding paddy stems and grains
Grant us a bountiful and pest-free harvest
We pray to you Lord of the Lords
We pray to you Consort of the Consorts.
The exposition part of the performance begins after the invocative songs. Here, Nurabi and her fellow workers sing a song together. They act as if they were digging the earth for sowing paddy. This song is called louyan eshei, or the tilling/digging song, sung while taking a round of the laipung anti-clockwise.
My father's field it is, Hey yanse [Let's till/dig].
My forefathers' field it is, Hey yanse.
Till the field for a peaceful kingdom, Hey yanse.
Till the field for the king's long life, Hey yanse.
Till the field for a prosperous kingdom, Hey yanse.
Till the field for bountiful crops, Hey yanse.
Till the field for long lives, Hey yanse.
Till it for it is the field for sougri and mayangba, Hey yanse.
Till it for it is the field for lomba and fadigom, Hey yanse.
Till it for it is the field for fourel and foujao, Hey yanse.
Till it for it is the field for singkha and singthum, Hey yanse.
Words in italics are names of Manipuri herbs, vegetables, rice, and eatable roots. Here the workers are talking about farming the land apart from cultivating rice, which is the staple for Manipuris. Towards the end of this song, the actor who acts as Haoba enters the performance space from the Northwest side of the sacred space. It is the 'rising action' in the performance. Haoba's costume is elaborately explained in Panthoipi Khongkul (2012). Haoba is played by someone who can sing, dance, do mimicry, and be good with retorts and improvisation. The entire performance rests on his antics while the other characters fulfil the ritualistic needs. He comes towards the South-West and gestures, shooting an arrow. From there, he goes towards the North-East, then to the South-East, repeating the same motion of shooting an arrow. Finally, he comes towards the centre of the sacred space and pretends to shoot an arrow towards the sky and then to earth while facing the presiding deities. This space is known as laiboula thapham or the space where the offerings are made on a banana leaf for the start of the daily ceremony. After this, Haoba disrupts Nurabi and her co-workers and argues that it is his land, not hers. A verbal fight ensues, and this fight is expressed in the form of khutlang eshei awai akhum or the workplace duet.
The khutlang eshei, sung by both Haoba and Nurabi, exhibits variations from one locale to another, owing to the inherent space for improvisation. Nevertheless, numerous individuals have standardised these songs over the years, resulting in an established format. The core theme of their performance centres around self-identity, as they remain unaware of each other's true selves. This duet showcases their cleverness, laced with the subtle emotions of the heart and the fervent desires of youth. Eventually, the duet reaches its conclusion when Haoba affectionately addresses Nurabi as "O! Nongmai Nurabi" three times.
Meitei Lambu disrupts the fight over the tilling rights of the land, which the spectators know as a hillock in the Saramching (a hill range), even though the performance is happening at the sacred ground. He is successful in bringing reconciliation between Haoba and Nurabi. The conflict is resolved, and the 'falling action' begins. After the reconciliation, Haoba and Meitei Lambu sing a khutlang eshei awai akhum.
The 'denouement' commences as Haoba joins Nurabi and her seven colleagues in cultivating the pam (jhum). While they diligently work the soil, Haoba experiences an unexpected bee sting. They track down the bee's origin and uncover a beehive, which they harvest and relish together. Following this brief diversion, they return to their task of tilling the jhum. When Haoba gets stung by the bees, Nurabi playfully remarks, "Let your pain subside, but the bulge should remain," thereby maintaining the central theme of their performance.
While tilling the pam, the workers will heave "Ho yanse! He yallu" (let us dig) and act as if digging the soil. The performers make a complete round digging the field, and the performance comes to an end after the louka eshei (end of work song) is sung by the maibi.
Figure 4: Haoba is at the Spot Where the Beehive is (Ningthoujam, 2013c).
The seven co-workers form a group who demonstrate typical movements and gestures such as tilling, sowing seeds, gathering food, and harvesting. Their actions and functions are restricted to the particular activity they are assigned to. Like the Ramlila of Kashi described by Schechner (2015), Haoba-Nurabi is also a "total theatre of inclusion and immersion, theatre where event swallows the participants" (p. 133). Haoba-Nurabi evokes a wide range of emotions within the audience. Functioning as a fertility rite, the characters, particularly Haoba and Nurabi, engage in actions and dialogues that, by contemporary standards, may be considered provocative and indecorous. Haoba's character is conceived as an annoying and loud individual with comic fervour, providing comedic interludes to the audience while his tussle with Nurabi continues unabated. Haoba interacts with the audience, brings laughter and blushes to the young girls and boys with his sexually explicit pantomime. The act is like the 'live communication' model of Sircar (2009), where the performers and the audience interact on multiple levels, including spectator-to-spectator communication.
Ritual actualisation apart, true entertainment is provided by Haoba and Nurabi through their interaction, which is a display of power and love through allegorical movements and dialogues. Nurabi is acted by a maibi who is already well-versed in the rites and rituals and trained in singing and dancing. Metaphorically, the two principal characters convey to the spectators the intent of the interaction: fertility and agricultural rites. To cite an example, Haoba chases Nurabi like a stag pursues a doe during the mating season. Nurabi responds to Haoba's approaches with typical cries and gestures. As previously mentioned, this performance is distinguished by its intricate costumes, extravagant movements, exaggerated gestures, witty dialogues, mimicry of sounds, exaggerated props, musical accompaniment, and comical makeup. Haoba's amusing antics and clownish acts serve as a deeper exploration of their symbolic defiance. This disruption may serve as a means to promote reconciliation in the wake of certain conflicts and divisions among the lovers. Unlike other aspects of the festival, there are no conflicting elements or opposing forces present in these rituals and actions within the broader context of Lai Haraoba. Seen as a whole, the act reads like Turner's (1982) 'social drama' model, where norm-governed social life is interrupted by the breach of relationships, which leads to a state of crisis and goes through redressive means and ultimately achieves reconciliation (p. 92).
The state of crisis created by Haoba cannot be found in any other aspect of Lai Haraoba. Haoba does a profaning of the sacred space by consecrating at the sacred ground out of pain (bee sting). However, this transgression is part of the elaborate ritual of procreation and wellness the spectators (the village and villagers) will receive from the propitiated deities. Clowning by Haoba can be seen as, "[…] uncovering of the multiple strands of sensorial information whose combination and structuration aim at creating a coherent, albeit often surprising, experience in the minds of the spectators" (p. 10), as Bouissac (2015) puts it. However, Haoba achieves subversion of the space and its sanctity through suggestive dialogues and explicit gestures. It is as if he has been given the licence to do anything within the ritualistic codes and conventions. At times, during Lai Haraoba, event organisers may request actors to incorporate more mature content into their performances. Nevertheless, we often witness a more restrained performance that does not deviate significantly from the primary objective of the enactment.
CONCLUSION
Since people's actions are often guided by their perception of what holds significance, engaging in ritual performances can have substantial social implications. Lai Haraoba can be aptly described as "cumulative theatre", where the constituent elements were added gradually, owing to its inclusive nature, involving the entire village or the area where the deities are believed to hold sway. The performers, encompassing singers and dancers, are drawn from the local community, and the propitiatory rituals cannot unfold without their active participation. As previously mentioned, it stands as one of the paramount cultural phenomena in Manipur, bearing immense religious and historical significance. The festival is rich with intricate layers of meaning and symbolism, offering a profound portrayal of diverse facets of Manipuri mythology and cosmology and how they are enacted in front of spectators year after year.
However, it is still astonishing how Lai Haraoba has withstood the test of time and obstruction in a land devastated by poverty and successive wars. On top of that, the king's patronage also stopped after Hinduism made inroads in the 17th century. Then came the infamous Seven Years of Devastation (1819-1826) brought by the Burmese invaders, which wiped out the Meitei population from the valley of Manipur and sent many from Manipur to take refuge in Assam, Tripura, and Bangladesh. Institutes which were under the king, such as Maiba Loishang (Institute of Shamans), Pena Loishang (Institute of Pena), and Pandit Loishang (Institute of Scholars), continued to act as the lifeline of the Lai Haraoba in codifying the rituals and conventions which every lai haraoba must follow. However, there has been a decline in the power and reach of these institutes after the king lost his power and democracy was adopted in an independent India. With the decline of these institutes, Lai Haraoba in Manipur has faced many ups and downs. The open spaces have been roofed and turned into mandap-like structures. Sacred groves have
Figure 3: Tangkhul youth wearing the Tangkhul dress, a wrap-around above the knee and the upper garment tightened diagonally over the chest. His head is adorned with headgear with animal horns and flowers. He wears a sheath at his waist made from kurao [a tree] with strings made from vines and carries in it the Tangkhul dao called torthang. On his back he carries a quiver filled with arrows, and a bow in his hand. He also wears a long Tangkhul cloth as a carry bag across his shoulder and walks as someone who is crossing a village towards a destination. [Author's prosaic translation]
Bacteriophages: The Good Side of the Viruses
Bacteriophages or phages are bacterial viruses that are known to invade bacterial cells and, in the case of the lytic phages, impair bacterial metabolism, causing them to lyse. Since the discovery of these microorganisms by Felix d'Herelle, a French-Canadian microbiologist who worked at the Institut Pasteur in Paris, bacteriophages began to be used in the treatment of human diseases, like dysentery and staphylococcal skin disease. However, due to the controversial efficacy of phage preparations, and with the advent of antibiotics, commercial production of therapeutic phage preparations ceased in most of the Western world. Nevertheless, phages continued to be used as therapeutic agents (together with or instead of antibiotics) in Eastern Europe and in the former Soviet Union. Therefore, there is a sufficient body of data to incite further studies in the field of phage therapy.
Introduction
The resistance of pathogenic bacteria to most, if not all, currently available antimicrobial agents has become a major problem in modern medicine, especially because of the increased numbers of immunosuppressed patients. The concern that humankind is approaching the "pre-antibiotic" era is becoming more real day by day, and this scenario increases the demand for the development of new antibiotics that can be used to treat these life-threatening diseases [1].
Before the discovery and the widespread use of antibiotics, it was suggested that bacterial infections could be prevented and/or treated with the administration of bacteriophages. Despite the fact that clinical studies with bacteriophages were discontinued in the United States and Western Europe, phages continued to be utilized in the former Soviet Union and in Eastern Europe. The results of these studies were extensively published in non-English journals and, therefore, were not available to the western scientific community [1]. In this book chapter, we describe the history of bacteriophage discovery, the first clinical studies with phages, the application of phages in different bacterial diseases, and the reason why their usage failed to prevail in the Western world. However, the results of these studies were not published, and the first reported application of phages in the treatment of bacterial diseases happened only in 1921, in a study performed by Richard Bruynoghe and Joseph Maisin [7], who used bacteriophages to treat staphylococcal skin disease. The bacteriophages were injected into and around surgically opened lesions, and a regression of the infections was observed within 24 to 48 hours. In view of these promising results, several companies began commercial production of phages against various bacterial pathogens [1].
Marketing of phages
D'Herelle's commercial laboratory in Paris produced five phage preparations against various bacterial infections: Bacte-coli-phage, Bacte-rhino-phage, Bacte-intesti-phage, Bacte-pyo-phage, and Bacte-staphy-phage, and they were marketed by what later would become the large French company L'Oreal [5]. The production of therapeutic phages also began in the United States at that time. In the 1940s, the Eli Lilly Company (Indianapolis, Ind.) produced seven phage preparations for human use against staphylococci, streptococci, Escherichia coli, and other bacterial pathogens, which consisted of phage-lysed, bacteriologically sterile broth cultures of the targeted bacteria (e.g., Colo-lysate, Ento-lysate, Neiso-lysate, and Staphylo-lysate) and the same preparations in a water-soluble jelly base (e.g., Colo-jel, Ento-jel, and Staphylo-jel). They were used to treat various infections, including abscesses, suppurating wounds, vaginitis, acute and chronic infections of the upper respiratory tract, and mastoid infections. However, due to their controversial efficacy, and with the advent of antibiotics, commercial production of therapeutic phages ended in most of the Western world [8,9]. Even so, phages continued to be used therapeutically (together with or instead of antibiotics) in Eastern Europe and in the former Soviet Union.
At its peak, the institute employed approximately 1,200 researchers and support personnel and produced several tons of phages a day against a dozen bacterial pathogens, including staphylococci, Pseudomonas, Proteus, and many enteric pathogens [1].
The bacteriophage laboratory of the Institute then began to produce phages for the treatment of many diseases, such as septicemia, furunculosis, and pulmonary and urinary tract infections and for the prophylaxis or treatment of postoperative and posttraumatic infections. In most of the cases, the phages were used against multi-drug resistant bacteria that were refractory to the conventional treatment with the majority of the antibiotics used in the clinical setting [10][11][12][13][14][15][16].
Experimental studies in animals
The first experimental studies in laboratory animals on the treatment of bacterial diseases using bacteriophages came from the laboratory of William Smith and his colleagues [17][18][19][20] at the Institute for Animal Disease Research in Houghton, Cambridgeshire, Great Britain. In one of their first published papers, the authors reported the successful use of phages to treat experimental E. coli infections in mice. In subsequent studies [18][19][20], the authors found that a single dose of a specific E. coli phage reduced, by many orders of magnitude, the number of targeted bacteria in the digestive tract of calves, lambs, and piglets previously infected with a strain of E. coli that caused diarrhea. The treatment also stopped the associated fluid loss, and all the animals that were treated with the bacteriophages survived the bacterial infection. Furthermore, such positive results rekindled the interest in phage therapy in the Western world and stimulated other researchers to investigate the possibility of using phages in the treatment of bacterial diseases caused by antibiotic-resistant bacteria capable of causing human infections.
Another in vivo study, performed by Soothill et al. [21], reported the importance of phages in preventing and treating diseases induced experimentally in mice and guinea pigs infected with Pseudomonas aeruginosa and Acinetobacter, suggesting that their usage might be efficacious in preventing infections of skin grafts used to treat burn patients. However, it is uncertain whether these "preclinical" studies preceded human clinical trials. Indeed, although many human trials were preceded by at least some studies using laboratory animals, the scientific literature regarding this topic is scarce.
Since the history of the discovery of bacteriophages and some pioneering studies on the subject have already been explored, the next section of this book chapter will cover the lytic and lysogenic cycles of phages, the mode of action of these microorganisms when used to treat bacterial diseases, and some specific advantages and disadvantages of such use in clinical settings.
Lytic and lysogenic life cycles of phages
Recent publications have provided interesting evidence that questions the notion that viruses are non-living organisms [22]. Erez et al., in their recent publication, identified a communication between viruses. They found a unique small-molecule communication system that controls lysis-lysogeny life cycles in a temperate phage [23]. Another study described the assembly of a nucleus-like structure during the viral replication of phage 201Φ2-1 in Pseudomonas chlororaphis, which suggested that phages have evolved a specialized structure to compartmentalize viral replication [24].
Phages can go through two different life cycles: the lytic and the lysogenic cycle. First, a phage binds specifically to a receptor on the bacterial surface and then injects its genetic material into the cell. The phage then takes advantage of the bacterium's biochemical machinery and replicates its genetic material, producing progeny phage. Subsequently, the phage synthesizes proteins such as endolysin and holin, which lyse the host cell from within. Holins are small proteins that accumulate in the cytoplasmic membrane of the host, allowing endolysin to degrade peptidoglycan and the progeny phage to escape the bacterial host. In the external environment, lytic phages can infect and destroy bacteria near their initial bacterial host (Figure 1). The rapid proliferation and the large numbers of lytic phages are advantageous when they are used for therapeutic purposes. However, lytic phages have narrow host ranges and infect only specific bacterial species. This limitation can be overcome by giving a cocktail of different phages to patients afflicted by bacterial infections [25].
In the lysogenic cycle, the temperate phages do not immediately lyse the host cell, instead, they insert their genome into the bacterial chromosome at specific sites. This phage DNA now inserted into the host genome is called prophage, while the host cell containing the prophage is called a lysogen. The prophage then replicates along with the bacterial genome, establishing a stable relationship between them. The disadvantage of using temperate phage in phage therapy is that once the phage DNA is inserted into the bacterial genome, it can remain dormant or even alter the phenotype of the host [25].
A further characteristic of temperate phages relevant to phage therapy is that the lysogenic cycle can continue indefinitely unless the bacteria are exposed to stress or adverse conditions. The signals that trigger such an event vary from phage to phage, but prophages are commonly induced when bacterial stress responses are activated by antibiotic treatment, oxidative stress, or DNA damage [26]. Once the lysogenic cycle ends, expression of the phage DNA starts and the lytic cycle begins. Recent studies found that phages that infect Bacillus species depend on small molecules called "arbitrium" to communicate with each other and make lysis-lysogeny decisions [23].
The biological implication of this phenomenon is significant: it explains why, when phages encounter large numbers of bacterial colonies and therefore find plenty of hosts to infect, they activate the lytic cycle. If host numbers are limited, the progeny phage instead activate the lysogenic cycle and enter a dormant state. These recent findings stimulate further research to determine whether other peptides are also implicated in this phenomenon or whether cross-talk is evident among different bacteriophages [25].
Furthermore, a recent study of the full genetic sequence of the T4 phage (GenBank accession number AF158101) showed that lysis of bacteria by a lytic phage involves a complex process comprising several structural and regulatory genes. It is also possible that some therapeutic bacteriophages carry unique and as-yet-unidentified genes or mechanisms responsible for effectively lysing their targeted bacteria. This led scientists to identify and clone, years later, an anti-Salmonella phage possessing potent lethal activity against Salmonella enterica serovar Typhimurium host strains. Another study showed a unique mechanism for protecting phage DNA from the restriction-modification defenses of an S. aureus host strain. Further studies are necessary to gather information that will be useful for genetically engineering therapeutic phage preparations [27].
Mode of action of the bacteriophages
The first studies on the pharmacokinetics of bacteriophages showed that phages entered the bloodstream of laboratory animals within 2 to 4 hours after a single oral dose and were found in internal organs (liver, spleen, kidney, etc.) in approximately 10 hours. Additionally, data concerning how long phages can remain in the body indicate that they may persist for an extended period, i.e., up to several hours [28].
Despite efforts to better understand the pharmacokinetics of phages, their self-replication creates a complex scenario influenced by both clearance and proliferation. Although in vivo amplification of phages has already been performed, the topic is dominated by mathematical models of in vitro infections, which do not necessarily correspond to in vivo amplification [29]. On the other hand, phage lytic enzymes behave like standard drugs in terms of pharmacokinetics. SAL200, an S. aureus-specific endolysin, has a t1/2 between 0.04 and 0.38 hours after intravenous administration in healthy volunteers. The authors stated that, based on the molecular weight, renal clearance and drug distribution from the intravascular to the extravascular space should be minimal; therefore, the presence of plasma proteases can explain the decay of this endolysin [30]. Other endolysins have a longer half-life (e.g., CF-302 has a half-life of 11.3 hours, while P128 has half-lives of 5.2 and 5.6 hours for the highest doses, 30 and 60 mg/kg, respectively) [31,32]. Thus, as lytic enzymes in pre-clinical analyses allow an easier determination of the dosing regimen compared to phages, lytic enzymes are currently preferred for use in patients [33]. In this sense, further studies are needed to better evaluate the pharmacological data concerning lytic phages, including full-scale toxicological research, before they can be used therapeutically in the Western world [1].
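To make the half-life figures above more concrete, the following is a minimal Python sketch of the first-order (exponential) elimination arithmetic they imply; the 4-hour time point and the assumption of simple one-compartment decay are illustrative choices, not values from the cited studies.

```python
import math

def remaining_fraction(t_half_hours: float, elapsed_hours: float) -> float:
    """Fraction of the initial amount left, assuming first-order elimination."""
    k = math.log(2) / t_half_hours       # elimination rate constant (1/h)
    return math.exp(-k * elapsed_hours)

# Half-lives quoted in the text (hours).
for name, t_half in [("SAL200, upper bound", 0.38),
                     ("CF-302", 11.3),
                     ("P128, 60 mg/kg", 5.6)]:
    print(f"{name}: {remaining_fraction(t_half, 4.0):.3f} of the dose remains after 4 h")
```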
Safety in the usage of phage preparations
From a clinical perspective, phages are apparently harmless. During the long period of use of phages as therapeutic agents in Eastern Europe and the former Soviet Union (and before the antibiotic era, in the United States), phages have been administered to humans (i) orally, in tablet or liquid formulations (10^5 to 10^11 PFU/dose), (ii) rectally, (iii) locally (skin, eye, ear, nasal mucosa, etc.) in tampons, rinses and creams, (iv) as aerosols or intrapleural injections, and (v) via intravenous access, though less frequently than the first four methods, and there are no reports of serious complications associated with their use [1].
Another aspect regarding the safety of bacteriophage usage is that they are extremely common in the environment (e.g., nonpolluted water has been reported to contain ca. 2×10^8 bacteriophages per ml) [34] and are usually consumed in foods, highlighting their potential to be used as bioremediation agents in polluted environments. However, it would be prudent to ensure the safety of these microorganisms before using them as therapeutic agents, making sure, for example, that (i) they do not carry out generalized transduction and (ii) they do not possess genetic sequences with considerable homology to genes related to antibiotic resistance, genes for phage-encoded toxins, and genes for other bacterial virulence factors [1].
Advantages in the use of bacteriophage therapy
Bacteriophage therapy presents many advantages, such as high host specificity (preventing damage to the normal intestinal flora and not infecting eukaryotic cells), the low dosages required for treatment, and rapid proliferation inside the host bacteria, making phages ideal candidates to treat bacterial infections [35]. Another advantage of bacteriophages, unlike antibiotics, is that they reinfect the host bacteria and mutate alongside them [36].
However, the high specificity of phages can be both advantageous and a limiting factor. To use monophage therapy it is necessary to check the efficacy of the phage by performing in vitro assays against the disease-causing bacteria before applying it to the patient, which can be a laborious task. The solution to this problem would be to use phage cocktails, which comprise a wide range of phages acting against different bacterial species or strains [37]. According to experts around the world, an ideal phage cocktail consists of phages belonging to different families or groups so that it targets a broad range of hosts. The phages would also have to possess a high adsorption ability to the highly conserved cell wall structures of the bacterial hosts. Additionally, the usage of phage cocktails may reduce the emergence of phage-resistant bacterial populations. On the other hand, other researchers advocate the sequential administration of individual active phages to the patient, though, in clinical practice, this appears to be a difficult strategy to perform [38].
Not only bacteriophages per se can be used to treat bacterial infections; their by-products can also be effective. It has been reported that lytic enzymes with a function similar to lysozyme can be used as antibacterial agents or in synergy with other antimicrobials, such as antibiotics, to improve the efficacy of treatment [39]. A phage-derived protein, endolysin, also possesses antibacterial and antibiofilm activity against ESKAPE pathogens [39][40][41][42][43]. V12CBD, a recombinant protein derived from the bacteriophage lysin PlyV12, was also able to attenuate the virulence of S. aureus and enhance its phagocytosis in mice [44].
Disadvantages in the usage of bacteriophage therapy
It is widely known that phages can be vectors for horizontal gene transfer in bacteria; in this process, bacteria can exchange virulence or antibiotic resistance genes, making these microorganisms resistant to a wide range of antibiotics [45]. Therefore, therapeutic phages must not harbor virulence factors or antibiotic resistance genes, nor elements like integrases, site-specific recombinases, and repressors of the lytic cycle that may accelerate the integration of these genes into the bacterial hosts. Algorithms that can predict the mode of action of phages as well as their virulence traits are available, but their databases need to be constantly updated with a greater number of phage genome sequences [46].
Recent studies demonstrated the in vivo efficacy of phages against infections caused by ESKAPE pathogens (Enterococcus faecium, Staphylococcus aureus, Klebsiella pneumoniae, Acinetobacter baumannii, Pseudomonas aeruginosa and Enterobacter spp.). Their authors used fully characterized phages that carried no virulence factors or antibiotic resistance genes; these phages were considered safe as they did not provoke any allergic or immune response in the patient, and they were also stable at varied pH and temperature, making them ideal candidates for bacteriophage therapy [47][48][49][50].
Another limitation is the relatively weak stability of phages and the need for proper administration so that they reach the site of action. Phage preparations can be applied orally, nasally or topically [51,52]. To overcome this limitation, studies have shown that phage efficacy is improved when the phages are entrapped in liposomes [51,[53][54][55]. They can also reach the infection site in the form of a powdered formulation [56].
Future perspectives on phage therapy
There is an increasing urge to restock our arsenal of antimicrobials to combat the ever-rising number of drug-resistant bacterial pathogens. Effective antibiotic combinations are scarce and, to add to the problem, new drugs arrive at a very slow pace. Phages are a promising source of new antimicrobial agents and have been sparking interest among researchers all over the world, but their use is still not approved in the United States or in Europe. Once the limitations on their use are overcome, for example by preventing phages from inserting genes into their bacterial hosts that could confer antibiotic resistance or lead to toxin production, bacteriophages will be extremely helpful for treating patients affected by bacterial diseases.
|
2021-10-29T15:16:02.009Z
|
2021-10-19T00:00:00.000
|
{
"year": 2021,
"sha1": "58e69ebdbde8b8fd6cdfbb91870fd64b0e1906f6",
"oa_license": "CCBY",
"oa_url": "https://www.intechopen.com/citation-pdf-url/75632",
"oa_status": "HYBRID",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "453dcce35acfe88f53ca3fd1939e94720d630dd0",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": []
}
|
236229546
|
pes2o/s2orc
|
v3-fos-license
|
Is Chain Affiliation a Strategic Asset or Constraint in Emerging Economies? Competitive Strategies and Performance in the Russian Hotel Industry
The purpose of the paper is to find out how chain affiliation, an important strategic choice and a key determinant of hotel performance, influences the strategy-performance nexus in the emerging economy context. It combines the resource- and institution-based views to investigate how chain affiliation moderates the relationship between competitive strategy and performance. This is done by distinguishing between market-based strategies based on differentiation or cost leadership, and non-market strategies that manifest in institutional advantage. The empirical analysis draws from original survey data on 162 hotels located in the Russian cities of Moscow and St. Petersburg. The paper finds that, first, in emerging economies the competitive edge of chain-affiliated hotels largely arises from market-based advantage, but only in terms of cost advantage. The finding that differentiation advantage is not an important factor of performance for chain-affiliated hotels suggests that firm-level resources derived from chain affiliation would not transform into competitive advantage in the emerging economy context. Second, it is found that institutional advantage is a more important determinant of performance for independent hotels, demonstrating the importance of local knowledge and relationships for firms in emerging economies. This finding also suggests that chain affiliation might be a strategic constraint rather than an asset for creating superior performance via institutional advantages in the emerging economy context.
Introduction
"What determines firm strategy and performance" is a fundamental question in strategic management and international business alike (Peng 2004). To answer this question, existing literature has often focused on the characteristics of the firm, in particular its resources and capabilities (Barney 1991;Barney et al. 2011). This resource-based view (RBV) initially maintains that the firm competes in the market either by offering differentiated products, or by attaining low cost position relative to its rivals (Conner 1991), and that the competitive advantage it gets is dependent on firm resources and its ability to deploy them efficiently.
The RBV was introduced as a firm-centered approach, implicitly assuming that the firm operates in an environment where market-supporting institutions are in place. This is understandable, given that where institutions are strong, as in developed economies, their role may be almost invisible (Meyer et al. 2009). In contrast, when markets malfunction, as in some emerging economies, the importance of institutions for firm strategies becomes evident (Ingram and Silverman 2002). Consequently, the RBV has been extended since its introduction to take into account the contextuality of resources. Researchers have argued that firm resources developed to fit a certain institutional environment would not be as applicable in a different institutional framework (Brouthers et al. 2008). Hence, resources and capabilities of firms from developed economies, where institutions are capable of supporting market-based business activities, would not be effectively applicable in emerging economies with institutional voids (Khanna and Palepu 2000). Consequently, a direct transfer of strategies and business models from developed to emerging markets is often not possible. This is due not only to institutional voids, but also due to market characteristics such as consumer preferences and market behavior (Khanna and Palepu 2010;London and Hart 2004).
The notion of contextuality of resources has inspired an alternative approach to firm strategy: The institution-based view that suggests that competitive advantage can arise also from non-market resources, referred to as institutional advantage. Li and Zhou (2010) conceptualize such advantage as consisting of both tangible benefits such as access to government-controlled resources, and intangible benefits such as political support and goodwill. Such resources can be accessed through pursuing a non-market strategy by, for example, establishing close ties with political decision-makers (Guo et al. 2014;Li and Zhang 2007;Peng and Luo 2000).
The importance of institutions as determinant of firm strategy and performance is particularly great in emerging economies (Wright et al. 2005), and there is a mounting body of empirical research on non-market strategies and firm performance in these economies, predominantly China (Fan et al. 2013). Most studies explicitly focus on non-market strategies, whereas others link non-market and market-based components of firm strategy (Li and Zhou 2010). Existing research has mainly focused on firm characteristics such as size, ownership or age, or industry influence in terms of service versus manufacturing firms as moderators of the strategy-performance nexus (Fan et al. 2013). Studies that would investigate the implications of institutions and resources on firm performance in emerging economies at industry level have started to emerge only recently (Jiang et al. 2018;Tang et al. 2019). Yet, strategy and performance are very much industry-specific constructs, as the RBV inherently looks at the firm resources and their effect on strategies and performance relative to other firms in the same industry (Acquaah and Chi 2007;Mauri and Michaels 1998). This paper contributes to the debate on firm strategy as determinant of firm performance in emerging economies by providing an industry-level analysis.
In this paper we empirically analyze how chain affiliation, an important strategic choice and one of the key industry-specific determinants for hotel performance (Menicucci 2018), influences the strategy-performance nexus in the Russian emerging hospitality industry. We build on Ingram and Baum's (1997) argument that chain affiliation may be a strategic asset for hotels as it provides operating knowledge and economies of scale, but also a strategic constraint as fitting into the global strategy designed for the chain reduces the degrees of freedom that managers of individual hotels have to respond to their local environments. We combine resource-and institution-based views on firm strategy to make a distinction between market-based strategies, eventually leading to differentiation advantage or cost leadership as suggested by the RBV (Barney 1991), and non-market strategies that manifest in institutional advantage (Li and Zhou 2010). We suggest that chain affiliation would be an asset for hotels pursuing market-based strategies, but constrain the implementation of non-market strategies.
Our empirical analysis draws from the survey data on 162 Russian hotels located in the cities of Moscow and St. Petersburg and their suburbs. Our results suggest that first, in Russia, the competitive edge of chain-affiliated hotels largely arises from market-based advantage, but only in terms of cost advantage. Our finding that differentiation advantage is not an important factor of performance for chain-affiliated hotels indicates that firm-level resources derived from chain affiliation are not necessarily transferable into competitive advantage in the emerging economy context. Finally, we found that institutional advantage is more important determinant of performance for independent hotels, which demonstrates the importance of local knowledge and relationships for firms in emerging economies.
The article is structured as follows. We first present the theoretical framing of our study and construct our hypotheses. Then we describe our empirical methodology, including the data and methods of analysis, after which we present the results of the empirical analysis. We finish the paper with discussion of the results, including the limitations of our study and suggestions for future research.
Theory and Hypotheses
The theoretical framing of our study builds on two core concepts of strategic management: competitive advantage and performance. Strategic management theories have traditionally treated competitive advantage and superior performance as interchangeable constructs (Ma 2000), but there have been repeated attempts to detangle them conceptually (e.g., Ma 2000;Newbert 2008). In this paper, we treat competitive advantage and performance as two different constructs, and investigate their mutual relationship moderated by firm strategic choice, i.e., the hotel's decision to affiliate to a chain.
We investigate competitive advantage and its relationship to performance by integrating the RBV that builds on market-based sources of competitive advantage, and the institutional approach on business strategy that view non-market resources as sources for competitive advantage.
The RBV (Barney 1991) looks to the internal resources of the firm for the explanation of its performance relative to other firms in the same industry (Acquaah and Chi 2007). Hence, to gain an advantage over its competitors, the firm needs to possess firm-specific resources superior to its competitors, and be able to deploy them efficiently. According to Barney (1991, p. 101), firm resources include "all assets, capabilities, organizational processes, information and knowledge, etc. controlled by a firm that enable the firm to implement strategies." These resources may be either tangible or intangible, or a combination of both. The understanding of the RBV on firm competitive advantage as based on a unique value creating strategy (Barney 1991, p. 102) echoes Michael Porter's (1980) classic definition of competitive advantage as resulting from the firm's ability to create for its buyers value that exceeds the firm's cost of creating it (Porter 1985). The two generic strategies to create superior value are to offer lower prices than competitors for equivalent benefits, or to provide unique benefits that more than offset a higher price, leading to competitive advantage in terms of cost leadership or differentiation, respectively (Porter 1985). In this paper we conceptualize these two forms of competitive advantage through the lens of the RBV, viewing them as resulting from the ownership and deployment of firm resources to implement either a low-cost or a differentiation strategy.
The RBV-or theory (Barney et al. 2011)-has been extended since its introduction to take into account the contextuality of resources. Researchers have argued that firm resources developed to fit a certain institutional environment would not be as applicable in a different institutional environment (Brouthers et al. 2008). Hence, resources and capabilities of firms from developed economies, where institutions are capable of supporting market-based business activities, would not be effectively applicable in emerging economies with institutional voids (Khanna and Palepu 2000).
The notion of contextuality of resources links to the institution-based view on business strategy that suggests that competitive advantage can also arise from non-market resources, referred to as institutional advantage. Li and Zhou (2010) conceptualize such advantage as consisting of both tangible benefits such as access to government-controlled resources, and intangible benefits such as political support and goodwill. Such resources can be accessed through managerial political ties (Guo et al. 2014;Li and Zhang 2007;Peng and Luo 2000).
In this paper, we address the sources of competitive advantage and firm performance in the Russian hotel industry through the lens of chain affiliation as a strategic choice of the hotel firm. The hotel chain is an organizational form characteristic of the hospitality industry. Ingram and Baum (1997, p. 68) define hotel chains as "collections of service organizations, doing substantially the same thing that are linked together into a larger organization". The chain typically consists of component hotels, and centralized units responsible for functions such as distribution or marketing (Ingram and Baum 1997). Hence, the potential benefits that chain affiliation offers to its members include access to superior industry-specific resources and capabilities, embodied in the centralized functions and in managerial practices.
Correspondingly, hospitality research considers chain affiliation as one of the key determinants that explain hotel performance (Sainaghi 2010). Most of existing research has identified a positive relationship between chain affiliation and hotel performance, as chain affiliation may contribute to survival of hotels (Ingram and Baum 1997) or lead to superior financial performance (e.g., Chung and Kalnins 2001;Menicucci 2018;Mitsuhashi and Yamaga 2006). Nevertheless, this research has paid little explicit attention to the mechanisms through which chain affiliation may improve performance. In this paper, we investigate the chain affiliation-performance relationship through the lens of competitive advantage. In particular, we maintain that from the RBV, chain affiliation provides the hotel firm with resources and capabilities to build market-based forms of competitive advantage through differentiation and cost leadership.
The basic advantage of hotel chains over independent hotels is their ability to form an identifiable image, a standardized hospitality product and guaranteed service quality through transfer of knowledge and best practices (Ingram and Baum 1997). Moreover, the possibility to use the brand of the chain in marketing is an incentive for independent hotels to join the chain (Dahlstrom et al. 2009). Well-established brands are intangible assets that serve as a source of strategic advantage and contribute to financial performance through higher margins (O'Neill and Mattila 2006). In the context of emerging economies such as Russia, where the industry standards in terms of, for example, service quality or branding are underdeveloped (Karhunen 2008;Sheresheva et al. 2016), we hypothesize that the knowledge and resources accessible through chain affiliation would help hotels to develop a superior service product, and thus serve as a source for differentiation advantage.
Hypothesis 1: Differentiation advantage is a more important factor of performance for chain affiliated hotels than for independent ones.
Moreover, we maintain that chain affiliation would serve as a source for cost advantage for the hotel firm. This is because the member hotels benefit from the chain's knowledge on the effective organization of business processes such as human resource management, and can save costs through economies of scale (Ingram and Baum 1997) through the use of centralized supply, marketing and information systems of the chain (Mitsuhashi and Yamaga 2006). Furthermore, the chain membership offers the hotel the possibility to use an existing, well established brand in marketing, which is a more cost-effective way than launching and promoting one's own brand on the market (Dahlstrom et al. 2009;O'Neill and Mattila 2006;Sheresheva et al. 2016). In sum, we maintain that chain affiliation provides the hotel firm with resources and capabilities to build competitive advantage via cost leadership, and make the following hypothesis: Hypothesis 2: Cost advantage is a more important factor of performance for chain-affiliated hotels than for independent ones.
Our first two hypotheses focused on industry-specific knowledge and resources as source of market-based advantage. At the same time, we maintain that local knowledge and non-market resources are important sources for competitive advantage in emerging economies, and apply the institution-based view on business strategy to formulate our final hypotheses.
In emerging economies, relations to institutional constituents are an important part of business strategy (Peng and Luo 2000). Such relations help coping with excessive red tape and bureaucracy that are characteristic to operating environments in emerging economies (Karhunen et al. 2018). The ability to effectively comply with regulatory requirements is particularly important for the hotel sector, where the nature of operations involving accommodation of individuals and selling of alcoholic beverages require numerous licenses and permits (Sharma and Christie 2010). Therefore, we argue that in addition to market-based differentiation and cost advantages, institutional advantage (Li and Zhou 2010) would be an important determinant of performance in the Russian hotel industry.
We support this argument by the research evidence on the positive performance implications of non-market strategies (Guo et al. 2014;Li and Zhang 2007;Peng and Luo 2000). Researchers have identified institutional support (Guo et al. 2014) or resource acquisition (Wang et al. 2013) as mediators of the political ties-performance relationship. Further, Tang et al. (2019) showed that privately owned firms may use managerial ties (including political ones) to improve their competitive position vis-à-vis governmental and foreign-owned firms that are considered to have superior resources. In this study, we maintain that institutional advantage is a particularly important determinant of performance for independent hotels, which do not have the market-based advantages of chain affiliation. In addition, affiliation to a foreign chain with strict governance policies may constrain the hotel management's ability to establish and maintain ties with institutional constituents in an environment such as Russia, where the line between relationship management and corruption is easily crossed (see, e.g., Karhunen et al. 2018). Moreover, scholarship on political ties and firm performance has reached a consensus that political ties constitute a double-edged sword with respect to firm performance, i.e., political ties have the potential to improve performance, but also run the risk of eroding performance. Hence, the firm needs to be able to evaluate the value of political ties, and know how to deploy them as a resource. Our third hypothesis thus reads as follows: Hypothesis 3: Institutional advantage is a more important factor of performance for independent hotels than for chain affiliated ones.
Sample
To test our hypotheses, we conducted a survey to hotel enterprises located in the two largest cities of Russian Federation, the capital Moscow and St. Petersburg. Russia is a rather typical emerging economy and identified as such by all major investment classification sources (Marquis and Raynard 2015). When considering economic, legal, social and governance aspects, Russia represents a rather classical case of an emerging economy (see Shleifer and Treisman 2005). Although business regulation in Russia has become less complex in recent years (World Bank 2021), problems characteristic to emerging economies such as corruption associated with the enforcement of regulation still persist (Transparency International 2021). The cities of Moscow and St. Petersburg are business, scientific and cultural centers of Russia, which makes them attractive both for leisure and business tourism. At the same time, they are among the most challenging institutional environments within Russia for doing business (World Bank 2012).
We started our empirical study by designing an English-language version of the survey questionnaire and then asking an independent translator to translate it into Russian and then back into English. To ensure the content and face validity of the survey measures, we mainly relied on existing scales in formulating the questions. Next, we finalized the questionnaire with a Russian research agency that performed the data collection and piloted the questionnaire with 20 hotel managers. The piloting revealed that the respondents had understood very well the survey items and that range of responses for most items was reasonably diverse.
We identified the hotel population for the survey by using two major hotel booking sites, Trivago and Hotels.com. We selected these two sites as they provide the most comprehensive selection of hotels in Russia, and also give such information on the properties that we needed to construct our sample.
At the first step, we searched for all properties in the category of "hotels", resulting in total 701 entries (398 in Moscow and 303 in St. Petersburg). At the second step, we excluded properties that (1) classify as mini-hotels (having 15 or less rooms) as they are subject to their own legislation, (2) provide only bed and breakfast, (3) have no functioning website, or (4) are apartments although booking sites classify them as hotels. This resulted in excluding 298 properties (129 In Moscow and 169 in St. Petersburg). Hence, the number of hotels meeting the criteria of our study was 403 properties, which formed our target population and thus the sample.
In each hotel, a senior manager served as the key informant, as these managers were expected to be familiar with their hotel's competitive strategy and performance. We subcontracted the Russian research agency with its trained interviewers to conduct the survey via personal contact, which is the research strategy that will most likely generate valid information in emerging economies (Li and Zhou 2010). The survey was implemented in November 2014-January 2015. Potential respondents were first contacted via telephone to invite them to participate in the research, resulting in 162 senior managers agreeing to participate. They were then interviewed onsite, which means a response rate of 40%.
The majority of the respondent hotels represent the mid-range segment, i.e., 3-star (49%) and 4-star (36%) properties. The size of the hotels varies from 16 to 930 rooms, the average number of rooms being 133. The majority are independent hotels, as only a third belong to a hotel chain. About 80% of the hotels are managed by the property owner. The hotel properties are in most cases in private Russian ownership, but a third of them have at least some municipal or state ownership and 17% at least some foreign ownership. 54 hotels (33%) are located in St. Petersburg and Leningrad region while 108 hotels (67%) are located in Moscow and Moscow region.
Dependent Variable
Our dependent variable is Revenues per Available Room (RevPAR), which is the most commonly used productivity measure in the hotel industry (Brown and Dev 1999) and frequently applied in academic research on hotel performance (Sainaghi 2010). In this study, we apply it as a subjective variable measuring the respondent's perception of the hotel's Revenues per Available Room compared to its competitors on an ordinal scale of 1-7 (from "much below" to "much above"). Managers' perceptions of performance are an appropriate measure of actual performance, as several studies have affirmed a high and statistically significant relationship between perceived and actual measures of performance (recently e.g., Day et al. 2015). A similar measure of performance has also been used in a recent study by Wilke et al. (2019).
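For readers unfamiliar with the metric, the short sketch below shows how objective RevPAR is normally computed (room revenue divided by available rooms, or equivalently average daily rate times occupancy); the figures are hypothetical and are not taken from the surveyed hotels, which were instead asked for a 1-7 perceptual rating.

```python
def revpar(room_revenue: float, rooms_available: int) -> float:
    """Revenue per Available Room for a given period."""
    return room_revenue / rooms_available

def revpar_from_adr(adr: float, occupancy: float) -> float:
    """Equivalent formulation: average daily rate times occupancy rate."""
    return adr * occupancy

# Hypothetical 120-room hotel, one night.
print(revpar(540_000, 120))          # 4500.0 (currency units per available room)
print(revpar_from_adr(6_000, 0.75))  # 4500.0
```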
The decision to use a perceptual financial measure instead of objective measures is further justified by our research context. The lack of transparency and financial misreporting are common features of firms in emerging economies, particularly Russia and China (Li et al. 2014). Therefore, the quality of accounting-based financial indicators is often questionable. This problem also relates to the complex governance structures of the Russian industries, where independent businesses are often organized into business groups or holdings for taxation purposes (see, e.g., Ledyaeva et al. 2015). This makes it challenging to attain hotel-level accounting data.
Explanatory Variables
We consider three dimensions of competitive advantage-differentiation advantage, cost advantage and institutional advantage-adapted from Li and Zhou (2010). We measured them by the first components of factor analyses of the respective seven-point items, as reported in Table 1. [Table 1: Items used for computation of explanatory variables and their descriptive statistics (mean, standard deviation). Each respondent was asked to comment on statements about the competitive position of the hotel in the past three years using 7-point scales from 1 = completely disagree to 7 = completely agree; for example, one differentiation-advantage item reads "We take great efforts in building a strong brand name - nobody can easily copy that".]
As can be seen from the table, the measures of normality, skewness and kurtosis suggest that the items' distributions do not depart substantially from normal distribution.
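As an illustration of this measurement step, the following is a minimal Python sketch that extracts a single-factor score from a set of 7-point items with scikit-learn; the random data and item count are placeholders, since the actual items and responses are those summarized in Table 1.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# Placeholder 7-point Likert responses: 162 hotels x 3 items for one construct.
rng = np.random.default_rng(0)
items = rng.integers(1, 8, size=(162, 3)).astype(float)

# The first (and only) component of a one-factor model serves as the composite score.
fa = FactorAnalysis(n_components=1, random_state=0)
scores = fa.fit_transform(items)[:, 0]
print(scores[:5])
```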
Control Variables
To account for extraneous variables that might influence a firm's performance, we included standard hotel performance factors used in hospitality research (Sainaghi 2010). Province dummy equals one if a hotel is located in a suburb/region of St. Petersburg or Moscow. City center dummy equals one if a hotel is situated downtown, i.e., within a 1.5 km radius from the city center. The second block of dummies controls for management-related issues. Management contract dummy captures the difference between contract-managed and owner-managed hotels (the latter being the reference group). Private ownership dummy equals one if state ownership is less than 50% (the reference being state-owned hotels with state ownership higher than 50%). Finally, Foreign general manager dummy equals one if the hotel's general manager does not have Russian citizenship.
Finally, a dummy for the hotel's chain affiliation (equal to one if a hotel is part of a chain and zero otherwise) is introduced to test Hypotheses 1-3.
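A minimal pandas sketch of how such dummies can be constructed is given below; the column names and example records are hypothetical, not the survey's actual coding.

```python
import pandas as pd

# Hypothetical hotel records (column names are illustrative only).
df = pd.DataFrame({
    "km_from_center":      [0.8, 3.2, 1.4],
    "state_share":         [0.0, 0.6, 0.2],
    "manager_citizenship": ["Russian", "Russian", "German"],
    "chain_member":        [True, False, True],
})

df["city_center"]       = (df["km_from_center"] <= 1.5).astype(int)   # within 1.5 km radius
df["private_ownership"] = (df["state_share"] < 0.5).astype(int)       # state share below 50%
df["foreign_gm"]        = (df["manager_citizenship"] != "Russian").astype(int)
df["chain"]             = df["chain_member"].astype(int)              # chain-affiliation dummy
print(df)
```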
All our data are retrieved from the original survey data collected for the study. A general concern in survey-based empirical research is the so-called common method bias (CMB), which arises from "variance that is attributable to the measurement method rather than to the constructs the measures represent" (Podsakoff et al. 2003) and can lead to either Type I or Type II errors in statistical deductions. While there is a debate on the seriousness of the problem, concerning, for example, the superiority of other-reports to self-reports (Conway and Lance 2010), it should be acknowledged. CMB is a potential problem in our case because all of our variables are self-reported by the same respondents and there is no lag between the performance and explanatory variables. However, we also think that there are some attenuating factors at play. First, our key variable of interest, competitive advantage, is very difficult to measure objectively (especially institutional advantage), which argues for self-reporting. Attempts to capture institutional advantage through other sources might induce measurement error more severe than the potential CMB. Secondly, even though our control variables are also self-reported, they are arguably objective in nature (such as the number of employees and rooms, start year, and classification) and thus should not be affected by the methodology or respondent. Thirdly, all our explanatory variables are quite static firm characteristics. Hence, there is no clear intuition why a lag would be needed to pick up the performance effect of the explanatory variables.
Descriptive Analysis of the Variables
In Table 3 we present basic descriptive statistics of all variables included in our empirical model. Table 4 presents additional descriptive statistics of categorical variables.
The mean value of our dependent variable, Revenues per Available Room is 4.74 with a standard deviation of 0.98. Hence, on average, respondents tended to evaluate their hotels' performance being above of the performance level of their main competitors. Indeed, from Table 4 we can see that 61% of respondents have chosen numbers above four when they were asked to assess the performance of their hotels relative to competitors from one (much below) to seven (much above). The values of skewness and kurtosis of the dependent variable (− 0.35 and 2.95, respectively) allow us to assume that it has normal distribution.
The mean number of rooms is 133.5 with a range between 16 and 930. The mean number of employees is 70.6 with a range between 6 and 500. In general, these numbers suggest that most hotels tend to be relatively small. Before entering these variables into the estimation model, we performed their logarithmic transformation, which helped to normalize them. Most hotels (130, around 81%) in the sample were established in the 2000s, while the oldest one was established in 1875. The star rating of the sampled hotels ranges from 2 to 5 with a mean value of 3.5. The majority of the respondent hotels represent the mid-range segment, i.e., 3-star (49%) and 4-star (36%) properties.
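The effect of the logarithmic transformation can be checked with a short sketch like the one below; the room counts are illustrative values spanning the reported range (16-930), not the actual sample.

```python
import numpy as np
from scipy.stats import skew, kurtosis

rooms = np.array([16, 40, 80, 133, 250, 930], dtype=float)   # illustrative room counts
log_rooms = np.log(rooms)

print(skew(rooms), kurtosis(rooms, fisher=False))          # raw counts: strongly right-skewed
print(skew(log_rooms), kurtosis(log_rooms, fisher=False))  # after log transform: closer to normal
```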
In Table 5, we present the correlation matrix of explanatory and control variables included in the proposed empirical model.
We can notice that our indicators of Cost and Differentiation advantages correlate with each other rather significantly (correlation coefficient equals to 0.58). To address this issue we employed a blockwise hierarchical approach to test our hypotheses (details are provided below). Moreover, the correlation coefficient between Natural logarithm of number of rooms and Natural logarithm of number of employees equals to 0.84. Hence, we decided to remove the latter variable (as it has slightly fewer observations than the former variable) from the model. Year when the hotel started its operation (
Estimation Method
The dependent variable in this study is ordinal. It reflects the respondent's view on the hotel's Revenue per available room compared to its competitors on an ordinal scale of 1-7 (from "much below" to "much above"). Typically, the ordinal data modelling problem is motivated by the latent regression perspective, as mathematically defined in Eq. (1):
Y = j if a_(j-1) < Y* <= a_j, (1)
where Y* is a continuous latent variable that is assumed to underlie the observed ordinal data. More specifically, Y* = β'X + ε, where X is a vector of explanatory variables, β is a vector of coefficients and ε is an error term; j is an ordinal response and the a_j are a set of cutpoints of the continuous scale for Y*. In other words, Y is observed to be in category j when the latent variable falls in the jth interval.
To model the ordinal dependent variable, we apply the logit transformation to the cumulative probabilities, as defined in Eq. (2):
logit[P(Y <= j)] = log[ P(Y <= j) / (1 - P(Y <= j)) ]. (2)
A typical model for the cumulative logits is presented in Eq. (3):
logit[P(Y <= j)] = a_j + β_1 x_1 + ... + β_n x_n, (3)
where j = 1, …, c − 1; c is the total number of categories; x_1, …, x_n are the n explanatory variables and β_1, …, β_n are the corresponding coefficients.
Equation (3) implies that for different j, the explanatory variables have a common effect, as reflected by the common β. It can be illustrated by the following example. Suppose we have two points of the explanatory variables, X_a and X_b (note that X is a vector); then
logit[P(Y <= j | X_a)] − logit[P(Y <= j | X_b)] = β'(X_a − X_b). (4)
Equation (4) indicates that the log odds ratio is proportional to the distance between these two points. This proportionality remains constant across different categories. Due to this property, the model in Eq. (3) is often referred to as a "proportional odds model". This model has been extensively studied and widely used in the literature (Agresti 2010;Greene and Hensher 2010). Thus, we also employ it in our paper.
Because our hypotheses suggest interaction terms composed of the competitive advantage indicators and the dummy for chain affiliation, we utilize a moderated regression analysis for testing these effects (Jaccard et al. 1990). As was already pointed out above, in order to account for the rather high correlation between the types of competitive advantage, we further employed a blockwise hierarchical approach to test our hypotheses (cf. Elvira and Cohen 2001, p. 599;McGrath 2001, p. 125). This blockwise procedure resulted in three additional models.
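The sketch below shows, under simplifying assumptions, how one block of such a moderated proportional-odds model can be estimated with statsmodels' OrderedModel; the simulated data, variable names and coefficients are placeholders and do not reproduce the paper's models.

```python
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(1)
n = 162
df = pd.DataFrame({
    "cost_adv": rng.normal(size=n),          # factor score for cost advantage
    "chain":    rng.integers(0, 2, size=n),  # chain-affiliation dummy
})
df["chain_x_cost"] = df["chain"] * df["cost_adv"]   # interaction (moderation) term

# Simulate an ordinal 1-7 outcome from a latent index plus logistic noise.
latent = 0.3 * df["cost_adv"] + 0.6 * df["chain_x_cost"] + rng.logistic(size=n)
df["revpar"] = pd.cut(latent, bins=7, labels=False) + 1

# One block of the hierarchical specification (controls omitted for brevity).
model = OrderedModel(df["revpar"].to_numpy(),
                     df[["cost_adv", "chain", "chain_x_cost"]],
                     distr="logit")
res = model.fit(method="bfgs", disp=False)
print(res.params)
```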
Results
In Table 6 we report the ordered logit estimation results. Table 6 contains our five regression models. In Model 1 we included only the control variables. The coefficients of five controls are statistically significant. In particular, on the one hand, Natural logarithm of number of rooms, The share of outsourced employees and City center dummy are positively related to performance (p-values equal to 0.002, 0.026 and 0.076, respectively). On the other hand, Management contract dummy and Foreign general manager dummy are negatively related to performance (p-values equal to 0.002 and 0.035, respectively).
Model 2 tests Hypothesis 1 that suggests that differentiation advantage is a more important factor of performance of chain affiliated hotels than of independent ones. This model contains control variables plus Chain dummy, Differentiation advantage variable and their interaction term. The results for control variables remain virtually the same as in Model 1. None of the coefficients of the included explanatory variables is statistically significant. Hence, we do not find supportive evidence for our Hypothesis 1.
In Model 3 we test Hypothesis 2 that suggests that cost advantage is a more important factor of performance of chain-affiliated hotels than of independent ones. This model includes control variables plus Chain dummy, Cost advantage variable and their interaction term. Once again, the results for control variables remain virtually the same as in Models 1 and 2. The coefficient of the interaction term between Chain dummy and Cost advantage variable is positive and highly statistically significant (p-value = 0.008) that gives firm support for our Hypothesis 2.
Our final hypothesis is tested in Model 4. Hypothesis 3 suggests that institutional advantage is a more important factor of performance of independent hotels than of chain affiliated ones. This model includes control variables plus Chain dummy, Institutional advantage variable and their interaction term. The results for control variables remain virtually the same as in Models 1, 2 and 3. Though the coefficient of the Institutional advantage variable is positive and highly statistically significant (p-value = 0.005), the coefficient of its interaction term with Chain dummy is not statistically significant (albeit it is negative as expected). In general, these results point to the conclusion that Institutional advantage is equally important for the performance of independent and chain affiliated hotels.
Finally, Model 5 includes all the control and explanatory variables. The results for control variables remain virtually the same as in previous models. In general, the results for the explanatory variables are rather similar to the results in Models 2-4, however, the full model gives support for the Hypothesis 3. In particular, the coefficient of the interaction term between Institutional advantage variable and Chain dummy is negative and statistically significant (p-value = 0.028). This indicates that Institutional advantage is significantly less important factor for performance of chain-affiliated hotels compared to independent ones.
It should be further noted that the coefficients in ordered logit model should be interpreted in a proper way. In particular, standard interpretation of the ordered logit coefficient is that for a one-unit increase in the predictor, the response variable level is expected to change by its respective regression coefficient in the ordered log-odds scale while the other variables in the model are held constant. E.g., in Model 4 of Table 6, a one unit increase in Institutional advantage measure would result in 0.411 unit increase in the ordered log-odds of being in a higher performance category while the other variables in the model are held constant.
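As a quick numerical illustration of this interpretation (a standard transformation, not a figure reported in Table 6), exponentiating the coefficient converts the log-odds change into an odds multiplier:

```python
import math

beta = 0.411           # coefficient of Institutional advantage in Model 4
print(math.exp(beta))  # ~1.51: the odds of being in a higher RevPAR category multiply by about 1.5
```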
Discussion
Our paper intended to contribute to the debate on firm strategy as determinant of firm performance in emerging economies by providing an industry-level analysis. In doing so, it adds to strategy research in emerging economies that has paid scant attention to the fact that firms formulate their strategies and deploy resources to gain advantage over their competitors in the same industry (Acquaah and Chi 2007). It further enriches the knowledge on the performance implications of non-market strategies in emerging economies by comparing them with those of market-based ones, and linking them to different business models. Finally, our study offers a new empirical context for the strategy research on emerging economies, which is dominated by studies on Chinese enterprises (Fan et al. 2013).
Our study integrated the RBV and institutional approach on firm strategy to investigate performance implications of market-based strategies, eventually leading to differentiation advantage or cost leadership as suggested by the RBV (Barney 1991), and non-market strategies that manifest in institutional advantage (Li and Zhou 2010). We further analyzed how chain affiliation as an important strategic choice and one of the key industry-specific determinants for hotel performance, influences the strategy-performance nexus in the Russian emerging hospitality industry.
We hypothesized first, that chain affiliation would lead to superior performance through two kinds of market-based advantages: differentiation advantage and cost advantage. This would be due to the access to industry-specific resources and capabilities possessed by the chain. Interestingly, our empirical analysis suggested that the competitive edge of chain-affiliated hotels indeed arises from market-based advantage, but only in terms of cost advantage. In contrast to our expectations, we found that differentiation advantage is not an important factor of performance for chain-affiliated hotels. This indicates that chain-affiliated hotels in Russia would not be fully able to transform the benefits of chain affiliation, including the possibility to use Western brands that are generally considered as more prestigious in the emerging economy context (Huddleston et al. 2001;Manrai et al. 2001;Pham and Richards 2015), into competitive advantage.
We explain this finding by the characteristics of emerging economies as an institutional context. In particular, chain membership can be a strategic disadvantage for the hotel firm if the chain owner has developed its strategy and business concept in a different institutional environment (Brookes and Roper 2010; Ingram and Baum 1997; see also Brouthers et al. 2008 on the contextuality of resources). For example, in emerging economies such as Russia, the hospitality industry is still underdeveloped and the concept of a hotel chain is not as established as in developed market economies (Karhunen 2008; Sheresheva et al. 2016). Therefore, local hotels may not be able to meet the standards of hotel chains in terms of guaranteed service quality and identifiable image (Ingram and Baum 1997) that would be the building blocks for differentiation advantage. This is linked to the poor quality of human resources in emerging economies, manifested in professional skills and service attitude (Andrades and Dimanche 2017; Sharma and Christie 2010). Hence, the international brand as such may not provide competitive advantage if it is not accompanied by an appropriate service level and customer experience.
Second, we hypothesized a positive relationship between institutional advantage and performance for independent hotels. Such knowledge is needed to cope with state regulation, which in emerging economies is a burdensome task for firms. This is due to excessive bureaucracy and cumbersome procedures in, for example, getting permits and licenses. Our results supported this hypothesis, further pointing in the direction that chain affiliation may be a strategic constraint in emerging economies. In particular, independent hotels may have more freedom in their policies in relation to institutional constituents, and less strict governance standards. Institutional advantage, when understood as consisting of benefits such as access to government-controlled resources or political goodwill (Li and Zhou 2010), often implies a two-way exchange between firms and authorities (Karhunen et al. 2018). Hence, governance standards required by chain affiliation may constrain hotel firms' opportunities to establish close relations with authorities and thereby build institutional advantage. This finding demonstrates the general importance of managerial freedom as a competitive strength of independent firms vis-à-vis chain-affiliated ones (Beaver and Prince 2004; Holverson and Revaz 2006).
As all research, our study has its limitations that at the same time provide avenues for future research. We focused empirically on one country, which may limit the generalizability of our findings to other geographic contexts. Future research might analyze the strategy-hotel performance nexus in other countries. Our study also applied chain affiliation as a firm-level proxy for access to industry-specific knowledge and assets, but did not measure the costs associated with the acquisition of managerial and marketing expertise via chain affiliation. We acknowledge that accounting for these costs might eventually dilute the cost advantage of hotels, and also that access to such industry-specific expertise may also be provided by the recruitment of experienced management. Hence, future research might investigate managerial characteristics as a source of hotel competitive advantage, and also consider how factors such as the entrepreneurial spirit associated with independent hotels are related to competitive strategies and performance.
Finally, we based our theorization on the idea that standardization of the service product and business processes is a key strategic asset for firms. Future research might study how competitive advantage of chain-based businesses is shaped in today's individualistic world, where customers are looking for unique and personalized experiences. Here, it would also be beneficial to study how chain affiliation is viewed by the customers, and whether Western hotel brands are eventually perceived as superior in the emerging economy context.
|
2021-07-26T00:06:16.808Z
|
2021-06-01T00:00:00.000
|
{
"year": 2021,
"sha1": "e3968c4221d84b9d05c9441e525e87293e6a4220",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s11575-021-00445-y.pdf",
"oa_status": "HYBRID",
"pdf_src": "SpringerNature",
"pdf_hash": "48173fed565f3cf653d5503e804b9c179dc92dbc",
"s2fieldsofstudy": [
"Economics",
"Business"
],
"extfieldsofstudy": [
"Business"
]
}
|
234005705
|
pes2o/s2orc
|
v3-fos-license
|
DESPOTIC LEADERSHIP AND JOB SATISFACTION AMONG NURSES: ROLE OF EMOTIONAL EXHAUSTION
Job satisfaction is a chronic issue in the healthcare sector. Specifically, in the current milieu of the COVID-19 pandemic, grave attention has been devoted to supporting the healthcare system and the wellbeing of paramedic staff. There is a dearth of research on contemporary leadership in the healthcare sector, particularly in developing countries. The objective of this study was to examine the direct negative effect of despotic leadership on job satisfaction, and its indirect effect through emotional exhaustion, among nurses, based on the assumptions of Affective Events Theory (AET). Data from a sample of 265 registered nurses were collected through a self-administered questionnaire distributed in public hospitals using a stratified random sampling technique. The PLS-SEM results supported the assumed effects and revealed that emotional exhaustion played a mediating role between despotic leadership and job satisfaction among nurses. This study advances the theoretical boundaries of AET, adds to research knowledge, and suggests feasible practical implications for HR and government bodies in the public healthcare sector of developing countries.
INTRODUCTION
The healthcare sector not only provides economic expansion opportunities but also serves the basic needs of the country (Samad, Memon & Kumar, 2020). Similarly, Swayne, Duncan, and Ginter (2012) pointed out that the healthcare system is one of the crucial factors for developing and strengthening a nation's well-being globally and for delivering health care services that meet population needs in developing countries (Mills, 2014).
Following this argument, the satisfaction of healthcare providers is instrumental in better healthcare services (Alameddine et al., 2017). Satisfied employees are, on average, 12% to 30% more productive, show 10% lower turnover, and have 25% fewer unscheduled absences than other employees. This helps the organization provide quality patient care (Tzeng, Ketefian & Redman, 2002), retain employees longer, and increase job performance (Blaauw et al., 2013).
Similarly, the nursing profession has gone through several changes during the last decades (Kraft et al., 2017), and the literature depicts job dissatisfaction as one of the chronic issues among nurses. Accordingly, nurses' job satisfaction in the healthcare sector, in particular, is problematic globally and is acquiring importance not only in developed economies such as the USA but also in under-developed economies such as Rwanda, the Philippines, Ghana, Malaysia, India, and Thailand (Hamid et al., 2014; Mills, 2014; Shipley, 2015; Atefi, Abdullah & Wong, 2016; Shah et al., 2018). Specifically for the healthcare sector, reduced job satisfaction among nurses has shown hefty financial consequences. For example, the annual financial loss due to dissatisfied employees was reckoned at $4.4 million for a 300-bed hospital (Kerfoot, 2015).
However, the impact of nurses' job satisfaction issues in healthcare is a growing concern for under-developed economies like Pakistan, where the condition is more desperate. Ironically, the healthcare sector of Pakistan is not well equipped, resourced, and established, particularly the local dispensaries and basic health units (Ariff et al., 2010). This is reflected in the reluctance of patients to utilize public facilities (Mansoor, 2013) and also affects hospital profitability. A similar concern was recently reported by Jafree (2017) regarding the lack of quality care in existing public hospitals of Pakistan, where nurses were extremely discontented with their jobs (Tasneem et al., 2018).
As job satisfaction continues to change over time, it is very important to assess and monitor it (Coomber & Barriball, 2007). In line with this argument, Francis (2016) reported many negative factors linked with low levels of job satisfaction within the healthcare field that are encountered by registered nurses in their day-to-day work. Recently, AMN Healthcare (2017) surveys reported that registered nurses have mixed feelings regarding job satisfaction and are worried about their choice of career, as nursing has worn them down physically and mentally, which calls for research to explore the key culprits of job dissatisfaction among nurses.
AMN Healthcare (2017) reveals that 82% of registered nurses reported that leadership is indeed the need of the time in terms of quantity and quality. Subsequently, this notion is firmly associated with a survey by Chook (HR in ASIA, 2016), which found that employee job satisfaction is directly affected by the behavior of leaders in the workplace. Importantly, leaders have the power to change the perceptions of followers through their behavior (Piccolo & Colquitt, 2006). However, the literature depicts unseen and ignored negative effects of leadership (De Hoogh & Den Hartog, 2008); in particular, the effect of despotic leadership on job satisfaction among nurses in Pakistan is still unknown to the scholarly world.
In the local context so far, only two recent studies have pointed to the dark features of despotic leadership. The first study, by Naseer, Raja, Syed, Donia, and Darr (2016), examined 480 professionals from the telecom, banking, and education sectors for the effect of despotic leadership on performance, organizational citizenship behavior, and creativity, supported by leader-member exchange theory, and reported a negative influence of despotic leadership.
Accordingly, the effect of leadership on job satisfaction may vary with leadership style, and a weaker relationship was also reported by Voon et al. (2011), pointing to a mediating variable between leadership and job satisfaction. Likewise, Nauman, Fatima, and Haq (2018) reported a negative effect of despotic leadership on work-family conflict through emotional exhaustion among 224 booksellers. However, these findings lack evidence from the healthcare sector, and the literature is silent on the relationship between despotic leadership and job satisfaction among nurses, for which this study is potentially important in the local context of Pakistan.
For example, Knudsen, Ducharme, and Roman (2009) reported partial instead of full mediation of emotional exhaustion between job resources and turnover. Similarly, Tayfur, Bayhan Karapinar, and Metin Camgoz (2013) reported weak mediation of emotional exhaustion between distributive justice and turnover, proposing further assessment.
Therefore, a new mediational role of emotional exhaustion is assumed in this study in the relationship between despotic leadership and job satisfaction among healthcare nurses in Pakistan.
Despotic Leadership and Job Satisfaction
Job satisfaction is one's positive feeling of contentment towards the job (Warr, Cook & Wall, 1979). A set of psychological and physiological circumstances and the workplace environment enables employees to reach specific satisfaction levels with the job tasks they perform (Hoendervanger et al., 2018). A worker's satisfaction therefore depends on dissimilar reasons and may vary from satisfaction with one part of the job to dissatisfaction with another part (Chen, Sparrow & Cooper, 2016).
Following this argument, job satisfaction is a positive or negative emotional evaluation of one's job in response to influencing factors at work, such as emotional exhaustion (Asghari et al., 2016) and leadership issues over subordinates (De Hoogh & Den Hartog, 2008), which venture significant associations detailed below. Aronson (2001, p. 252) referred to despotic leadership as "leaders who distort the mission and goals of the organization and abuse resources by using them to further their interests. These leaders may secure the acquiescence of subordinates by threatening to and employing manifest force". On the other hand, De Hoogh and Den Hartog (2008) maintained that the ethical side of leadership is well studied while the destructive aspects are ignored, leaving a vast research gap that is less examined in the literature (i.e., despotic leadership).
As leaders have the power to change the perceptions of followers through their behavior (Piccolo & Colquitt, 2006), it is important to take the effect of leadership on job satisfaction into account. Likewise, on a recent critical note, the Global Agenda Council, in its outlook on the top global trends of 2015, found that 86% of respondents agree that there is a leadership crisis (Shiza, 2015). The negative effect of despotic leadership on employee life satisfaction was reflected by Nauman, Fatima, and Haq (2018). This prompts a clear concern about whether the leader exerts a negative effect on job satisfaction, which is addressed in this research with the development of the following hypothesis: • H1: 'Despotic leadership' is negatively related with 'job satisfaction'.
Despotic Leadership and 'Emotional Exhaustion'
Alharbi (2017) reported that leadership style is a strong predictor of nurses' job satisfaction.
The previous literature has shown that offensive supervision (i.e. a workplace stressor) is linked to emotional exhaustion. As the wave of destructive supervisor-subordinate interaction is still felt, the past few years have seen steady growth in the literature focusing on the potentially dark side of leadership (Conger, 1990; Schaubroeck et al., 2007). Thus, despotic leadership, being offensive, works as a workplace stressor and would directly induce emotional exhaustion among employees (Aryee et al., 2008).
Accordingly, a lack of positive leadership acts of support from supervisors leads to emotional exhaustion among employees (Mulki, Jaramillo & Locander, 2006). These negative behavioral aspects of leadership were intense and indicated in the local context by Nauman, Fatima, and Haq (2018), where despotic supervision resulted in increased emotional exhaustion among 224 booksellers. Thus, the discussion leads to the development of the following hypothesis: • H2: 'Despotic leadership' is positively related with 'emotional exhaustion'.
Emotional Exhaustion and Job Satisfaction
Moore (2000, p. 336) described emotional exhaustion as "depletion of emotional and mental energy needed to meet job demands". Emotional exhaustion is an overload of demands beyond one's time and energy (Boles, Johnston & Hair, 1997), as it captures an individual's chronic, work-related strains at the workplace (Gaines & Jermier, 1983).
The existence of emotional exhaustion in Pakistani nurses is intense and chronic, as observed in military nursing students (Khokhar et al., 2016), where 78.6% of nurses showed mild emotional exhaustion, 20.2% showed moderate emotional exhaustion, and 1.2% showed high emotional exhaustion. Job satisfaction, in contrast, is a positive emotion (Feldman & Arnold, 1985); it is the positive feeling that an employee has about one's job.
The mediating role of emotional exhaustion
The supervisor's role as a leader is well known in the healthcare sector, as is their bad behavior towards employees. Considered in the context of affective events theory (Weiss & Cropanzano, 1996), despotic leadership serves as a stressor or negative event at the workplace. Hence, a supervisor's stroppy attitude and lack of feedback to employees is a negative event at the workplace and a significant factor in exhaustion (Maslach et al., 2001). A significant linkage between emotional exhaustion and low job satisfaction among nurses was also established by Zhang et al. (2014).
Different leadership styles, based on their behavioral aspects, have different influences on subordinates' job satisfaction (Voon et al., 2011). For example, the dark side of leadership, portraying destructive aspects that have negative effects (Schyns & Hansbrough, 2010) on emotional exhaustion (Nauman, Fatima & Haq, 2018), ultimately lowers job satisfaction (Tepper, 2000; Hur, Kim & Park, 2015). These arguments lead to the following hypothesis: • H4: Emotional exhaustion mediates the relationship between despotic leadership and job satisfaction. Based on these pieces of evidence on the relationship of despotic leadership with job satisfaction and the mediating role of emotional exhaustion, Figure 1 shows the hypothesized research framework.
Population and Sample Size
The target population of 1630 nurses working in district hospitals of the public healthcare sector in the Sindh province of Pakistan was targeted to address the phenomenon under study. A sample of 310 was estimated following Krejcie and Morgan (1970). The data were collected by distributing 484 questionnaires in 24 district hospitals with random stratification based on the number of beds available in each hospital, following Gok and Sezen (2013). Subsequently, a total of 315 questionnaires were returned at a 65% response rate, of which 265 were usable.
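For readers who wish to check the sample-size estimate, the following minimal sketch applies the Krejcie and Morgan (1970) formula. Only the population figure (1630) comes from the paper; the constants (95% confidence, 5% margin of error, P = 0.5) are the standard assumptions behind the published table, which the authors presumably consulted directly rather than computing the formula themselves.

```python
# Minimal sketch of the Krejcie and Morgan (1970) sample-size formula.
# Shown only to illustrate how an estimate close to the paper's figure
# of 310 is obtained for a population of 1630.
def krejcie_morgan(N, chi_sq=3.841, P=0.5, d=0.05):
    """Required sample size for population N at 95% confidence, 5% margin of error."""
    return (chi_sq * N * P * (1 - P)) / (d**2 * (N - 1) + chi_sq * P * (1 - P))

population = 1630
print(round(krejcie_morgan(population)))  # ~311, close to the tabled value of 310
```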
Measurement
In this study, the despotic leadership variable was measured by a 6-item scale (5-point Likert, 1 = strongly disagree to 5 = strongly agree) adapted from De Hoogh and Den Hartog (2008). Job satisfaction was measured through a 15-item Likert-type scale anchored between 1 = completely dissatisfied and 7 = completely satisfied, adopted from Warr, Cook, and Wall (1979). The α-value of the original scale was 0.85, and a recent study by Koon and Pun (2018) reported α = .892.
RESULTS AND ANALYSIS
A total of 265 questionnaires were usable and were screened through SPSS for the analysis. PLS-SEM results are less contradictory than regression analysis when it comes to indirect and mediating variable effects (Ramli, Latan & Nartea, 2018), and PLS-SEM was applied in the current study for its ability to handle non-normal data. For evaluating the measurement model, the researcher must determine individual item reliability, convergent validity, and discriminant validity (Nunnally & Bernstein, 1994; Hair et al., 2016). Therefore, the following tests were applied:
Measurement Model -Convergent validity
According to Hair, Hult, Ringle, and Sarstedt (2016), convergent validity measures the degree to which the indicators of a construct correlate with one another. Therefore, factor loadings, composite reliability (CR), and average variance extracted (AVE) must be checked. Following Chin's (1998) suggestions, the factor loadings were above 0.6 (see Table 1), AVE was above 0.5, and CR values were above 0.7 (see Table 2).
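As a hedged illustration of how these thresholds are checked, the sketch below computes CR and AVE from a set of standardized loadings using the standard formulas. The loadings in the example are hypothetical and are not the study's actual values.

```python
# Illustrative computation of composite reliability (CR) and average
# variance extracted (AVE) from standardized factor loadings, checked
# against the thresholds used in the paper (loadings > 0.6, AVE > 0.5,
# CR > 0.7). The loadings below are hypothetical.
def composite_reliability(loadings):
    sum_l = sum(loadings)
    error = sum(1 - l**2 for l in loadings)
    return sum_l**2 / (sum_l**2 + error)

def average_variance_extracted(loadings):
    return sum(l**2 for l in loadings) / len(loadings)

example_loadings = [0.72, 0.68, 0.81, 0.75, 0.70, 0.66]  # hypothetical construct
print(f"CR  = {composite_reliability(example_loadings):.3f}")       # should exceed 0.70
print(f"AVE = {average_variance_extracted(example_loadings):.3f}")  # should exceed 0.50
```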
Discriminant Validity
The distinctiveness among the variables is called discriminant validity, for which the Heterotrait-Monotrait ratio (HTMT) was measured following the guidelines of Henseler et al. (2015).
Structural Model Testing
The structural model was assessed following the recommendations of Hair et al. (2016) through the bootstrapping procedure, with 5000 bootstrap samples on 265 cases, to indicate the significance level of the path coefficients of the direct and indirect hypothesized relationships (see Table 3), in which despotic leadership was assumed to have a negative effect; bootstrapping also provides better precision for mediation model estimation. Table 3 reveals that despotic leadership had a mediated negative effect on job satisfaction (β = -0.153, t = 4.453, p < 0.001).
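The sketch below is a simplified, hypothetical illustration of the percentile-bootstrap logic behind such a mediation test. It uses simulated data and plain OLS slopes rather than the study's PLS-SEM estimates, so the numbers it produces are not the paper's results.

```python
import numpy as np

# Minimal sketch of a percentile bootstrap for an indirect (mediated)
# effect, analogous to the 5000-resample procedure reported in the paper.
# The data are simulated, and the paths are simple OLS slopes
# (the mediator-to-outcome path is not adjusted for the predictor).
rng = np.random.default_rng(0)
n = 265
despotic = rng.normal(size=n)
exhaustion = 0.5 * despotic + rng.normal(size=n)        # path a (simulated)
satisfaction = -0.4 * exhaustion + rng.normal(size=n)   # path b (simulated)

def indirect_effect(x, m, y):
    a = np.polyfit(x, m, 1)[0]   # slope of mediator on predictor
    b = np.polyfit(m, y, 1)[0]   # slope of outcome on mediator (simplified)
    return a * b

boot = []
for _ in range(5000):
    idx = rng.integers(0, n, n)  # resample cases with replacement
    boot.append(indirect_effect(despotic[idx], exhaustion[idx], satisfaction[idx]))

lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"95% bootstrap CI for the indirect effect: [{lo:.3f}, {hi:.3f}]")
# The indirect effect is deemed significant when the interval excludes zero.
```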
DISCUSSION
Building on AET, all hypothesized relationships were tested and found support.
The results also followed a logical flow in terms of the hypothesized framework. This supported the authors' argument that despotic leadership features not only exist in Pakistan but also have a negative influence on job satisfaction among nurses, as reported through their responses. Through the lens of past literature, despotic leadership threatens employees' emotional resources and escalates emotional exhaustion, which in turn mediates the reduction in job satisfaction. These relationships not only expand AET but also contribute towards the COR theory of Hobfoll (1989).
Further, a recent study by Alola, Avci, and Ozturen (2018) in five-star hotels in Nigeria (n = 329) accounted for supervisors causing emotional exhaustion among employees. In the local context, Khokhar, Chaudhry, Bakht, Alvi, and Mohyuddin (2016) found that 72% of nurses showed emotional exhaustion caused by their supervisors. Thus, this study addresses another unexplored relationship, between despotic leadership and emotional exhaustion, in light of past studies.
Since job satisfaction is also an emotion of contentment with one's job (Spector, 1985), and as backed by AET, emotional exhaustion is a negative event and is expected to have a significantly strong negative relationship with job satisfaction. Emotionally exhausted workers often feel helpless, lose self-esteem, and feel a lack of accomplishment (Cordes & Dougherty, 1993; Moore, 2000). This argument was supported among Chinese nurses, who revealed a strong association of emotional exhaustion with lower job satisfaction (Zhang et al., 2014).
Scholars have elaborated on the unclear mediating role of emotional exhaustion with respect to employee job satisfaction (Halbesleben & Bowler, 2007; Khokhar et al., 2016). Thus, the results of this study contribute not only to affirming the basic assumptions of AET but also to expanding the theoretical knowledge on despotic leadership, emotional exhaustion, and job satisfaction among nurses of the healthcare sector in the local context of Pakistan.
Practical Implications
This study was conducted in public healthcare sector hospitals, in a setting where patients are reluctant to use public hospitals and prefer private clinics. The implications of this study will not only increase profitability but also restore hampered government attention to public hospitals in Sindh. This study not only addressed this crucial issue but also provided a much-needed and efficient remedy for government officials that is easy to identify and ready to implement. Besides that, job satisfaction has been reported as a major issue among public hospital nurses in many examinations and reports.
The results of this study elaborate the problem of job satisfaction, which, as respondents reported, is still a major issue and is mainly influenced by the despotic leadership features of supervisors and by emotional exhaustion. The above discussion and results show that job satisfaction among nurses working in public hospitals is directly and indirectly affected by the negative events created by despotic leadership, which are ultimately mediated into negative consequences for job satisfaction.
HR management should carefully assess candidates before deploying any supervisor in the place of a leader. Attention should also be focused on ways to nurture job satisfaction among nurses by addressing emotional grievances through training, socialization, and issue recognition in public healthcare hospitals.
Limitations and way forward
This study followed a cross-sectional design and was limited in terms of time, resources, and scope. Therefore, future research may consider a longitudinal design for confirmation of the hypothesized relationships. Secondly, self-reporting was employed, which can also be considered a limitation of the study, as it may have inflated the relationships among variables since the randomly nominated participants can be predisposed by their emotional state, attitude, and behavior.
The current study attempted to minimize this issue by ensuring anonymity and improving the selected scales (Podsakoff, Mackenzie & Podsakoff, 2012): scale items were simplified in terms of wording and answering formats, and written in clear language.
Thus, future studies may employ other strategies to establish the generalizability of despotic leadership and of emotional exhaustion as a mediator on job satisfaction in other fields such as public and private banking, education, insurance, tourism, and the hotel industry.
Conclusion
The results of this study revealed that despotic leadership affects job satisfaction negatively and increases emotional exhaustion, which further transmits this negative influence of leaders into deteriorating job satisfaction among employees. The present study supported the assumptions of AET and expanded the literature towards understanding the issue of job satisfaction among nurses in public sector hospitals in Pakistan. This study also addressed research gaps and paved the path for further research explorations.
|
2021-05-10T00:02:51.090Z
|
2021-02-01T00:00:00.000
|
{
"year": 2021,
"sha1": "c1076b1a77bf4ea526e382415304e4d5b040d0e8",
"oa_license": "CCBYNCSA",
"oa_url": "http://www.ijmp.jor.br/index.php/ijmp/article/download/1344/1956",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "28845506e0360d3f0d6a25cde5bfb12644f50059",
"s2fieldsofstudy": [
"Psychology",
"Medicine"
],
"extfieldsofstudy": [
"Psychology"
]
}
|
55543051
|
pes2o/s2orc
|
v3-fos-license
|
Ineffective programme management on the delivery of health infrastructure projects: A case of the Northern Cape
Programme management remains a challenging management practice in the Northern Cape Department of Health (NCDoH), particularly when a health facility project has to integrate the components of construction management and operations management in order to attain the benefits of strategic importance. The Northern Cape Department of Health consists of various administrative programmes that are supposed to work together in order to attain the benefits of strategic importance. The inability to integrate construction management and operations management is attributed to poor programme management coordination within the Northern Cape Department of Health. This article reports the findings of a case study which determined how programme management coordination among the administrative programmes in the Provincial Office of the NCDoH, the Z. F. Mqcawu District Office, and the hospital that underwent revitalisation could be improved during the construction of a health-care facility. Data was obtained through interviews with personnel in the three sectors (provincial office of the NCDoH, district office of the Department of Health, and the hospital that underwent revitalisation) directly involved in the delivery of the infrastructure component of the project and in the preparations for operationalisation of the health facility after completion and handover. The results of the study revealed the inability of the NCDoH to integrate construction management and operations management, due to poor programme management coordination when a health facility project serves as a means for the delivery of health services after handover. Furthermore, the research revealed, among others, functional silos and a lack of skills and knowledge for the identification of the critical success factors relevant for integration.
Introduction
Integration of construction management and operations management remains a challenge during the implementation of a health-care facility project in the Northern Cape Department of Health. The inability to integrate the two concepts compromises the Department of Health's ability to attain the programme management benefits aimed at through the implementation of the Hospital Revitalisation Programme (HRP). The National Department of Health established the HRP in 2003 to rationalise hospital health facilities, health technology, organisational development, and quality assurance in health services. The HRP serves as a response to the policy directives of the Reconstruction and Development Programme. According to the RDP, the "key to this link is an infrastructural programme that will provide access to modern and effective services like health, water and education" (South Africa, 1994: 10). The rationalisation of health-care facilities through the HRP developed five components that can be allocated to two categories, i.e. construction management and operations management. The HRP's programme management approach aligns its components to the delivery objectives of the administrative programmes in the Department of Health. The inability to integrate the delivery objectives of the administrative programmes with the components of the HRP negatively affects the success of the programme.
The integration of infrastructure and services as required by the RDP cannot be achieved if an institution does not have an effective programme management plan. The attempts to integrate infrastructure and services in the Northern Cape Department of Health (NCDoH) do not yield the intended benefits. The components of the HRP are implemented separately from the delivery objectives of the other administrative programmes in the NCDoH. In order to attain an integrated and cross-functional approach, the HRP developed a Project Implementation Manual that outlines the implementation processes. The inability to align the delivery objectives of the administrative programmes in the NCDoH with the components of the HRP also delays the implementation of the project. With poor programme management coordination, the department develops functional silos that are unable to synchronise the integrative methodological approaches from construction management and operations management and their critical success factors relevant in programme management. Functional silos are the different functional structures in an organisation that focus primarily on their immediate delivery objectives rather than contributing to the objectives of the entire organisation (Parker & Byrne, 2000: 503). Functional silos build up internal competition and make administrative programmes lose focus on the entire organisational context and strategic objectives (Miller, Wroblewski & Villafuerte, 2014: 10). The research, therefore, contends that there is sub-optimal programme management coordination in the NCDoH. The aim of this research is to determine how programme management coordination among the administrative programmes in the Provincial Office of the NCDoH, the Z. F. Mqcawu District Office, and the hospital that underwent revitalisation could be improved during the construction of a health-care facility.
The following section presents a contextual background of the NCDoH, with special focus on the outsourcing of construction management, purpose of administrative programmes (in particular, Programme 8 [see Table 1]), the reporting and purposes of the administrative programmes on the organisational structure at executive management level, and the programme management structure of the HRP.
Contextual background on the NCDoH and HRP
The construction of health-care facilities in the Northern Cape is outsourced to the Department of Public Works and the Independent Development Trust. The delivery objectives of these institutions include, among others, the provision of infrastructure. The construction of Dr Harry Surtie Hospital in the town of Upington was executed by the provincial Department of Public Works. The latter obtains its mandate from the National Department of Public Works, whose objectives include the provision and management of the infrastructure needs of the user departments, as outlined in the Department of Public Works Strategic Plan: 2012-2016 (South Africa, 2012). Although the Department of Health provides funding for the construction of health facilities, upon completion and handover the buildings as immovable assets are transferred back to the Department of Public Works as the custodian of government immovable assets. This makes the Department of Health a user within the health-care facilities. The Government Immovable Asset Management Act (2007) describes a user as a national or provincial department that uses an immovable asset in support of its service delivery objectives.
The delivery objectives of the NCDoH include the provision of quality health-care services, as outlined in the Annual Performance Plan, 2010/11-2012/13 (South Africa, 2010a). In order to attain the NCDoH delivery objectives, seven administrative programmes have been established. Table 1 presents the purpose of each programme, which relates to the strategic objective of the Department. Prioritisation of health-care facilities for construction is the joint responsibility of the administrative programmes of the NCDoH. The Service Transformation Plan (STP) of the NCDoH is used for the selection of the new health facility required in a particular district. The purpose of the STP in the NCDoH is to plan an optimal health service delivery package for the needed resources to achieve a sustainable service, as outlined in the Service Transformation Plan, version 3c (South Africa, 2009b). Hence, the HRP Project Implementation Manual requires that, through the STP, the Department of Health should select the most appropriate revitalisation project. The responsibility to develop a required management plan for all the components of the HRP remains the joint responsibility of all the administrative programmes. The implementation of the components of the HRP requires an integrative approach from the administrative programmes in the NCDoH, despite the fact that accountability in the organisational structure is in a vertical format, with no visible cross-functional reporting. This vertical accountability does not oblige the administrative programmes to ensure the successful delivery of a health-care facility project. Hence, some of the components of the HRP that share the same delivery objectives with the administrative programmes in the NCDoH remain neglected and lag behind during the implementation of the programme. Figure 1 shows the levels of authority in the NCDoH at executive management level. Programme managers on this structure report to the Head of Department, who reports to the Member of the Executive Council. The directorates that exist below an administrative programme execute its objectives as stated in the purpose of that particular administrative programme. This organisational structure does not have a provision for administrative programmes to establish cross-functional reporting among each other. Neither the strategic plan nor the annual performance plan of the NCDoH makes reference to shared objectives among the administrative programmes, irrespective of the intervention that the new strategic objective brings. The HRP was developed on a programme management methodology and finds its relevance in the objectives of the administrative programmes of the NCDoH. The components of the HRP are infrastructure management, organisational development, quality assurance, health technology, monitoring, and evaluation. These components are meant to be managed together in order to achieve the objective of the HRP. The implementation of the components of the hospital revitalisation programme in a health facility project enhances the strategic benefits from the core delivery objectives of each administrative programme in a department through a programme management approach. These components are implemented as projects, although they are meant to be managed together in order to achieve the objectives of the HRP. The programme management approach enables the "process of managing multiple interdependent projects that lead towards an improvement in an organisation's performance" (Mittal, 2009: 1). Pinto & Kharbanda (1995: 73-74) point out
that "projects serve as the conduit for implementing top management's plans, or goals, for the organization".The coordination of multiple projects through programme management enables the achievement of the benefits of strategic objectives.According to the Standard for Program Management (2013), components within a program are related through a common outcome or delivery of a collective set of benefits.The components of the HRP focus on achieving benefits of strategic objectives in the most appropriate revitalisation project, as outlined in the Hospital Revitalisation: Project Implementation Manual, 2010-2011(South Africa, 2010b: 27).The HRP is based on the principles of programme management, which confine it to the delivery of the "benefits and capabilities that an organisation can use to meet and enhance strategic objectives" (Sanghera, 2007: 93).In order for HRP to ensure that its intended goals are attained, it developed a Project Implementation Manual (PIM), which is revised annually by the Project Management Forum.The HRP remains relevant to deliver the objectives of strategic importance in government, based on its programme management methodology.Morris & Pinto (2004: 266) contend that there are two characteristics that make programme management the most suitable methodology to ensure successful implementation of strategies, namely "the fact that it is a cyclic process, which enables regular assessment of benefits", and "the emphasis on the interdependencies of projects, which ensures strategic alignment and delivery of strategic benefits".
The research problem states that there is poor programme management coordination among the administrative programmes in the provincial office of the NCDoH, the Z. F. Mqcawu District office, and Dr Harry Surtie Hospital during the construction of the Dr Harry Surtie Hospital. The construction of this hospital required that all the components of the HRP be implemented so that the NCDoH can attain programme benefits. The implementation of all the components of the HRP requires an integrated approach at all levels of administration and across the administrative programmes in the NCDoH. The delivery of health-care facilities, in particular the hospitals, is based on the programme management methodology of the HRP.

In the following section, relevant literature relating to the research problem is explored in order to explain concepts in both construction management and operations management that are relevant to the programme management methodology. These concepts, i.e. integrated management, systems approach, benefits management and realisation, critical success factors, and continuous improvement, may not have been combined in order to discuss their interrelatedness in the construction of a health-care facility. Hence, there is the challenge to deliver a successful programme in the NCDoH.
Literature review
In this section, the researchers present the concepts that form the core discussion of the research problem. In order to understand the importance of programme management, the researchers explain what it is and its importance for an institution in attaining the benefits of strategic objectives. The systems approach in the programme management methodology is also explored, wherein the concept of integrated management for construction management processes and operations management processes becomes relevant. Programme management is defined as the centralised coordinated management of a specific programme to achieve its strategic goals, objectives and benefits (Sanghera, 2008: 3). A component of programme management, i.e. benefits management and realisation, finds its relevance in the attainment of the benefits of an organisation's strategic objectives through the programme management methodology. Hence the need to identify and synchronise the critical success factors from both construction management and operations management during the development of a programme management plan. Continuous improvement becomes important during the development of a programme management plan, as its absence might make the organisation lose the ability to support the long-term strategy and delivery mandate. The Infrastructure Delivery Management System (IDMS) of 2010 emanates from the systems approach and finds its relevance in the perspective of construction management for the delivery of health-care facilities (South Africa, 2010c). The IDMS is limited to construction management processes, i.e. from planning to maintenance of the buildings.
Programme management
Programme management "is concerned with optimising project benefits in symbiotic fashion and with integrating project elements at the programme level" (Cloete, Wisssink & De Coning, 2006: 218-220).According to the Standard for Programme Management (2013), programme management harmonizes its projects and programme components and controls interdependencies in order to realise special benefits.This statement is supported by Levin & Green (2013: 486) who claim that "programmes take account of the benefit realisation as they are designed to last as long" as the benefits are satisfactorily realised.Williams & Parr (2004: 31) suggest that the enabling factor in programme management is its ability to carry out multiple elements that can be managed separately, "but sequencing of implementation and management of critical dependencies that require a level of management coordination over and above that at the individual project level".
Levin & Green (2013: 39) state that "programmes are established to achieve benefits that may not be realised if their components were managed individually".Hence, a programme serves as a means to achieve multi-level benefits that cannot be achieved if a single project is deployed.There is a difference between an administrative programme and a project-related programme.An administrative programme in the NCDoH consists of various directorates and units under the leadership of a functional manager.A project-focused programme consists of various components that are intended to achieve a common strategic or business goal.The coordinated management processes of programme management require the application of knowledge, skills, tools, and techniques to meet the programme requirements and to obtain benefits and control not available by managing projects individually (Standard for Programme Management, 2013).Since programme management exists at strategy-formulation level, its coordination involves decision management, governance, stakeholder management, and benefits management (Thiry, 2010: 59).Therefore, programme management serves as "an implementation tool that delivers organisational benefits resulting from aligned corporate strategies, business-unit, and operational strategies.It facilitates coordinated and integrated management of cross-functional portfolios of projects and normal operations that bring about strategic transformation, innovative continuous improvement and customer service excellence in organisations, with the aim of achieving benefits of strategic importance" (Steyn & Schmikl, 2008: 4).Programme management "success is measured by the degree to which the program satisfies the needs and benefits for which it was undertaken" (PMBoK Guide, 2008: 9).
Construction management
The South African Council for the Project and Construction Management Professions (SACPCMP) (South Africa, 2000) defines construction management as "the management of the physical construction process within the built environment and includes the co-ordination, administration, and management of resources". Similarly, Dykstra (2011: 376) explains that construction management involves the processes of coordinating, monitoring, evaluating, and controlling of construction activities. These set out the construction management parameters. In addition, construction management embraces "activities from conception to physical realisation of a project" (Gahlot & Bhir, 2002: 1). In the Department of Health, the processes that lead to the physical realisation commence from the identification of a need incorporated as a strategic objective. The fact that provision of health-care services has to take place in a constructed structure requires the Health Facilities Management Programme to engage the end users in the development of requirements. The infrastructure component of the programme management plan incorporates the construction management processes in order to execute its objectives. As mentioned earlier, the construction management processes include the "effective planning, organising, application, coordination, monitoring, control, and reporting of the core business processes" (Harris, McCaffer & Edum-Fotwe, 2013: 1).
The successful delivery of the infrastructure component of the healthcare facility does not signify the complete delivery or programme closure.The implementation of a health-care facility at project or component level aligns the processes of planning, organising, application, coordination, monitoring, control, and reporting found in construction management.
The project conceptualisation stage in construction management involves the end users and the design team. This stage requires immense stakeholder and communication management. The approach to building construction in the NCDoH follows the appointment of an implementing agent. The latter appoints a team of professional service providers to design and produce a bill of quantities for procurement purposes for the building infrastructure component. The programme manager responsible for HFM appoints a project manager who is responsible for construction management processes. It is the responsibility of the programme manager from the NCDoH to ensure that there is proper coordination of the construction requirements with the programme manager from the implementing agent.

According to Gahlot & Bhir (2002: 3), coordination in construction management involves integrating the work of various departments and sections. This requires proper integration of all project-related activities and disciplines such as architectural, mechanical, electrical, civil and structural engineering, together with quantity surveying. The implementation of the disciplines involves the construction project manager, the contractor, subcontractors, and professionals in each discipline. Therefore, construction management enables the integration of project activities into the main project. Coordination in construction management is not only about the work produced from other disciplines for the built industry, but also about ensuring that the needs of the end user are incorporated and that, upon completion of the construction work, the buildings shall increase efficient delivery of the health-care services. In order for construction management to be coordinated with other components, a systems thinking approach needs to be applied. The IDMS has been established to address systems thinking for the built environment, but there is still a gap in finding synergy with the operations management processes. The implementation of the infrastructure management components experiences variations to scope as a result of inadequate and late engagement of subject matter experts and incomplete requirements during the compilation of a requirements plan.
Operations management
The factors that relate to operations management in this research are categorised, as outlined in the PMBoK Guide (2013), under enterprise environmental factors and organisational process assets such as organisational culture, stakeholder management, and performance management. According to Stevenson (2009), operations management involves the management of systems or processes that create goods and provide services. The inputs in operations management are human resources, processes and information, whereas the outputs are in the form of goods and services (Reid & Sanders, 2010: 3). Operations management in the delivery of a health-care facility project enables planning, organising, coordinating, and controlling of the resources required to produce the goods and services. The evaluation of the implementation of the factors relating to operations management is linked to the programme implementation period.
Integrated management
The integration of construction management components and operations management components during the implementation of a health-care facility should still take into account the programme management methodology. In order to achieve an integrated approach, this requires a "collaborative process, which emphasises constructive relationships" at programme management level (Thiry, 2010: 66). In an administration-focused organisation, programme management concentrates on activity monitoring, while in an integration-focused organisation, a project is a means to attain business strategy. In an integration-focused institution, the primary goal of programme management is "integration and synchronization of workflow, outcomes and deliverables of multiple projects to create an integrated solution" (Martinelli, Waddell & Rahsculte, 2014: 13). This makes coordination in programme management an essential aspect. Programme management acknowledges that components operate as a system. Grady (2007: 7) explains a system as "a collection of things that interact to achieve a specific purpose". Integration also "means completeness and closure, bringing components of the whole together in an operating system" (Barkely, 2006: 3). A system creates interdependence between various components; therefore, the greater the interdependence, the greater the need for cooperation (Castellano, Roehm & Hughes, 1995: 25). A "true integration ties all components of the organization into one coherent system where all activities, whether implemented together or individually, are focused on achievement of overall goals and are ultimately the guiding mission of the organization" (Pardy & Andrews, 2010: 13). An integrated management approach cannot be achieved without cross-functional management.

Integration is achieved through shared norms and values within an organisation (Burke, 2014: 58). The establishment of a cross-functional approach is determined by the culture existing in the organisation.

Cross-functional management helps the organisation to "improve both the factual basis of the strategic planning process and the chances of successful implementation of the final plan" (Jackson & Jones, 1996: 12). Therefore, an organisation cannot succeed if it seeks to maintain what may be regarded as "functional silos" within its operations.

The building infrastructure is a main product in construction management in a health facility project. "Integration is essentially the major function of program management, running several projects simultaneously and using all the support systems of the organisation" (Barkely, 2006: 13).
In order to respond to the aim of the research, construction management and operations management were studied in detail, although in relation to other components that relate to programme management. Steyn & Schmikl (2008: 4) explain that programme management "coordinate[s] and integrate[s] management of cross-functional portfolios of projects and normal operations that bring about strategic transformation, innovative continuous improvement and customer service excellence in organisations, with the aim of achieving benefits of strategic importance". In the delivery of a health-care facility, three variables, namely critical success factors, continuous improvement, and construction management, have the ability to make an organisation succeed or fail in attaining a successful programme. These variables should be taken into account by an organisation, in this case the NCDoH, when a project requires the contribution of other programmes for the successful delivery of a health-care facility project. An institution's inability to find a logical relationship among the three variables prevents administrative programmes from determining success in the delivery of a health-care facility project and leaves an organisation defining a complete building infrastructure as successful programme management while excluding all other components that are supposed to be implemented to finality. The Project Implementation Manual (PIM) states that, "in order to manage the integration process at provincial level, a Provincial Steering Committee must be formed for the overall coordination of provincial projects" (South Africa, 2009a: 22). The exit of the hospital revitalisation programme from a health-care facility assumes that all the operational systems will continue through an integrated management system in order to deliver an improved quality and sustainable health service. Pardy & Andrews (2010: 4) argue that "an effectively implemented integrated management system aligns policy with strategic and management system objectives and provides the framework upon which to translate these objectives into functional and personal targets".
Systems approach
The Infrastructure Delivery Management System (IDMS) was introduced to provide a model for the delivery of public service infrastructure projects through management companions that include portfolios, projects and operations management that takes into account construction management processes (CIDB, 2012: 6).
The Construction Industry Development Board (CIDB), upon which the IDMS (2010: 8) was established, mentions that government infrastructure delivery departments lost efficiency in integrating resources under portfolios and programmes of coordinated projects. Despite the systems approach of the IDMS through its management companions, the challenge of delivering a successful programme in the NCDoH still remains. The systems approach consists of three elements, namely inputs, processes, and outputs (Gardiner, 2005: 23). These elements collaborate to achieve the main goal of an institution. Cloete, Wissink & De Coning (2006: 218-220) explain that the "institutionalization of a programme and project management approach in government in order to ensure integrated service delivery, has also proven problematic because of the lack of appropriate systems". The systems approach "defines the relationships between the various parts of the organisation with each other and with the outside environment, and establishes how these relationships work and lastly, it establishes the purpose of these relationships" (March, 2009: 25). The importance of the systems approach in programme management is that it helps create interdependence between various components; therefore, the greater the interdependence, the greater the need for cooperation, communication, and leadership (Castellano, Roehm & Hughes, 1995: 25). The "subsystems such as corporate mission, strategic objectives, organizational functions, organizational structure, critical processes, and the programme exist to effectively and efficiently convert the business inputs into the desired outputs" (Milosevic, Martinelli & Waddell, 2007: 57).
Benefits management and realisation
The Standard for Programme Management (2013) indicates that programme benefits may be realised incrementally throughout the duration of the programme, because they are a result of the executed organisational goals and objectives. The benefits "are the tangible business improvements that support the strategic objectives measured at operational level" (Thiry, 2004: 77). The benefits of strategic objectives are realised when the programme becomes aware of the advantage gained as a result of engaging in a particular programme. In order to realise the benefits of strategic objectives, the institution has to clean up any barriers that may arise as a result of a poorly coordinated programme management approach. In this way, an institution develops new capabilities of realising operational benefits. Benefits management is inextricably linked to critical success factors (CSFs), because the benefits management phase, i.e. benefit identification, in particular, requires the identification of the CSFs for a programme. Benefit realisation is defined as the process of realising actual outcomes by breaking down strategic objectives via programme components or projects, then monitoring the outputs to confirm that the intended benefits have, in fact, been achieved (Bradley, 2006: 20). The benefits realisation approach enables an organisation to understand and address the human aspects of the project, including resistance to change, training needs, and new ways of working. This requires the development of a benefits realisation plan that outlines the "details of the expected benefits to be realised and how these benefits will be achieved" (Thiry, 2010: 110).
Critical success factors
In order to realise the benefits of strategic objectives, the CSFs in both operations management and construction management should be identified. Some of the factors are common, while others vary according to management requirements. According to Mendoza, Perez and Griman (2006: 56), the CSFs represent a set of a "limited number of areas in which the results, if satisfactory, will guarantee successful competitive behaviour for organisational objectives". The simplicity of the CSFs lies in the fact that they can be "expressed as a qualitative statement" and only quantified for assessment purposes in the form of key performance indicators (Thiry, 2010: 113). The quantification of the CSFs requires the involvement of all the stakeholders from other related programmes. The research, therefore, tends to raise a question: What methodology can be implemented to enhance process integration among the health administrative programmes? This question contributes to responding to the following research question: How could coordination among the administrative programmes in the provincial office of the NCDoH, the district office of the Department of Health and the hospital that undergoes revitalisation during the implementation of a health facility project be improved? Dobbins (2001: 48) mentions that "developing a process by which managers could identify their CSF… teaches managers how to think in terms of CSF" during the management of the project on site. The identification of critical success factors becomes irrelevant if there are no programme objectives. The benefits of programme management come from "co-ordinated change management, governing the mutual dependencies between projects and activities, and a central focus on realizing the benefits" (Hedeman & Van Heemst, 2010: 16).
Continuous improvement
An organisation operating on a programme management approach needs to continuously improve, and "without such improvement, the program management discipline will gradually deteriorate, losing the ability to support the business strategy of the organization" (Milosevic, Martinelli & Waddell, 2007: 455). Although continuous improvement puts more focus on commitment by senior management in an organisation, better results are achieved when an organisation gets the commitment of all staff members within it. According to Turnbeaugh (2010: 42), the "management commitment aspect centers around a continuous improvement methodology of plan, do, check, act; a methodology that has transcended the use of TQM and has been integrated into other improvement-oriented procedures as well". This methodology emanates from the Deming philosophy on continuous improvement. Continuous improvement cannot take place if an organisation lacks commitment by programme managers. In order for an organisation to effectively conduct continuous improvement processes, there should be a "culture in which individuals and groups take responsibility for continuous improvement based on common understanding of organisation's goals and priorities" (Sahu, 2007: iii).
Research methodology
In order to choose a relevant research method for this research, inductive reasoning was applied. Collis & Hussey (2009: 8) note that the theory in inductive reasoning "is developed from the observation of empirical reality". Inductive reasoning is based on two premises, i.e. the case and the characteristics of the case. Both these premises enable the researcher to develop conclusions through generalisation and to develop new thoughts. The "case study method allows investigations to focus on a case and retain a real-world perspective" (Yin, 2014: 4). A positivist and interpretivist approach was followed to examine the research subjects' understanding of the phenomena and their motivations (Porta & Keating, 2008: 13). This made the case study method relevant for selection due to its descriptive, inductive and heuristic approach (Somekh & Lewin, 2012: 54). Furthermore, because the research problem focuses on a social reality, the case study method is able to ask what is going on with the phenomena in order to generate intensive investigations for the development of subjective data (Burns, 2000: 460). As a result, a qualitative research method was applied, which helped the researchers study the attitudes and behaviours of the research subjects within their natural settings (Babbie & Mouton, 2010: 270). Furthermore, qualitative research is more concerned with the "greater depth with a relatively small number of participants in order to enhance the quality of the response through interpretative methods, unstructured and semi-structured interviews" (Garner, Wagner & Kawulich, 2009: 63).
This enabled the researchers to evaluate the operational relations at programme management level in the NCDoH between construction management and operations management. The research problem, i.e. sub-optimal programme management coordination within the NCDoH, prompted the researchers to apply the case study method in order to identify and investigate three components of the case. The case for the research study is the NCDoH. The three components of the case are the Provincial Office made up of various administrative programmes, the Z. F. Mqcawu Health District Office and the revitalised Dr Harry Surtie Hospital. The research subjects in these components are salient to the enquiry.
Sampling method
The research sample focused on the three levels of administration in the NCDoH, i.e. the administrative programmes in the provincial office of the NCDoH, the health district office of the Department of Health at Z. F. Mqcawu, and Dr Harry Surtie Hospital. The purposive or judgemental sampling method was utilised. The project commissioning team, the project management team and end-user staff in a health-care facility were sampled. The choice of this sampling method was influenced by the aim of the research, the line of enquiry pursued by the research study, the investigative question and the literature reviewed.
Purposive sampling "enables diversity and ensures that units of analysis are selected as they hold a characteristic that is salient to the research study" (Ritchie, Lewis, Nicholls & Ormston, 2014: 143). Purposive sampling helped "access knowledgeable people, i.e. those who have in-depth knowledge about particular issues" (Cohen, Manion & Morrison, 2007: 115).
Sampling size
Through purposive sampling, the research targeted the "most visible leaders", i.e. the managers in the three units of analysis. The application of this purposive sampling included a snowball sampling approach, whereby the people who were approached at the initial stage of data collection assisted the researchers to identify other managers relevant to the research study (Babbie & Mouton, 2010: 167). As a result of the purposive sampling method, data was collected from end-user staff members in the three sectors of the NCDoH, the project commissioning team and the project management team. The data was collected from forty-five (45) respondents, up to the stage where the level of saturation of the data was reached by the researcher (Maree, 2007: 82). Ritchie & Lewis (2003: 80) explain that saturation in purposive sampling occurs when data collection does not yield any other new or relevant data. Thirty (30) managers from the provincial office, five (5) from the Z. F. Mqcawu District Office, and ten (10) from Dr Harry Surtie Hospital responded to the interview.
Data collection
An interview guide was used to collect empirical data. The interviews were held with individuals in the Provincial Office of the Department of Health, the District Office of the Department of Health of Z. F. Mqcawu, and Dr Harry Surtie Hospital. The semi-structured interview guide assisted the researchers to maintain consistency and the logical flow of the questions. Nine (9) questions based on the objective of this research were generated. The interview guide was structured as follows: Questions 1-2 and 8-9 tested the respondents on the application of programme management methodology; Questions 3-4 focused on programme performance management; Questions 5-7 addressed the respondents on the cross-functional approach. Probing questions were asked during discussion with the interviewees in order to obtain further information. An average of forty (40) minutes was spent in conducting each interview. Prior to utilising the research instrument, it was discussed with four people at different units in the NCDoH to determine whether a common interpretation of concepts would be obtained.
The research instrument
In order for the researchers to obtain respondents' perceptions with regard to the application of programme management methodology in the NCDoH, qualitative research questions were generated. The questions in the research instrument enabled the researchers to obtain substantial information on the research problem. The research instrument enabled the collection of data based on the perception of people in the three levels of administration in the NCDoH, i.e. the Provincial Office, the District Office of the Department of Health and a hospital, with regard to programme management coordination on the delivery of a health-care facility project. The interview guide was meant to obtain qualitative data from the respondents for the research objective. The use of an interview guide enabled the researchers to be consistent with the questions posed to the respondents.
Response rate
All forty-five (45) respondents identified through the snowball approach within the purposive sampling method responded to all the questions presented for discussion.
Data analysis
The application of the inductive analysis of data in qualitative research enabled the researchers to extensively condense raw data into a brief, summary format, and to "establish clear links between the research objectives and the summary findings derived from raw data" (Dey, 2005: 55). The data analysis is categorised according to responses from the sectors (programmes in the provincial office, the district office of the Department of Health, and the revitalised Dr Harry Surtie Hospital).
Limitation
The limitation to this research study was the uneasiness of some of the employees from Dr Harry Surtie Hospital and Z. F. Mqcawu District Office to provide information about the relationship between the two levels of administration with the provincial office of the NCDoH.Although this limitation existed in certain cases during the interviews, it had minimal impact as the respondents' uncertainties were addressed by the inclusion of a confidentiality clause in the interview guide.
Findings from the case study
The findings from the case study are based on the responses obtained during the interview sessions with the research subjects outlined in the interview guide.
Programme performance management
There are disparities of opinions from the three sectors or levels of administration on whether the objectives of the various administrative programmes and the objectives of the hospital revitalisation programme are aligned.
Responses from the provincial office
Of the respondents, 20% agreed that there is an alignment between the objectives of the various administrative programmes and the objectives of the hospital revitalisation programme; 80% of the respondents disagreed with the above statement for the following reasons.
A decision concerning the prioritisation of a health-care facility is taken at the Provincial Office of the Department of Health and does not take into account the inputs from the District Office of the Department of Health, i.e. Z. F. Mqcawu in this case.
The implementation of the hospital revitalisation programme does not take into account the Service Transformation Plan (STP) as the long-term plan of the NCDoH in terms of the health facilities' list of priorities. The choice as to which health-care facility should be constructed is made from the STP. Any prioritisation that does not consider the priority list as recorded in the STP causes misalignment in terms of other services such as the human resources plan, which ultimately affects budget allocation in the Department. A respondent mentioned that "there is either no proper communication among (District Office of the Department of Health, revitalised hospital and administrative programmes in NCDoH) or absence or lack of co-ordination between their project managers". Furthermore, it was reported that there is a "lack of commitment from programme managers and lack of knowledge" for aligning the existing delivery objectives of the administrative programmes when a new strategic objective has been incorporated.
Responses from the District Office of the Department of Health
Of the respondents, 40% agreed that the District Office of the Department of Health, revitalised hospital and administrative programmes in NCDoH properly align with the objectives of the hospital revitalisation programme. The respondents mentioned that, once a health-care facility project has been prioritised, the District Office of the Department of Health is obliged to align with the objectives of the hospital revitalisation programme, as it must assist in ensuring the success of the project.
The remaining 60% of the respondents explained that this does not necessarily mean that there is no alignment with the objectives of the hospital revitalisation programme, although they found themselves participating in certain stages of the prioritised health-care facility project. In this instance, the respondents participated in the project commissioning meetings for the delivery of a health-care facility project, but without knowing how that particular activity fits into the objectives of the District Office of the Department of Health.
Responses from Dr Harry Surtie Hospital
The respondents from Dr Harry Surtie Hospital pointed out that they have hardly any knowledge about the objectives of the hospital revitalisation programme, but that they participate in the project-commissioning meeting to ensure successful delivery of the project, as they are the direct beneficiaries after completion. It was further explained that the staff members from the old hospital do not have much of a role to play at the beginning of the project, but that they get more involved during the preparations to move into the new premises, as they have to be ready to provide health services. Further, the respondents from Dr Harry Surtie Hospital view the alignment as ineffective, because the joint decisions arising from the alignment are not implemented.
Programme management practice in the NCDoH
The research on this input sought to determine whether all other administrative programmes should incorporate and report on the new strategic objective of the construction of a new health-care facility on components that relate to their normal delivery objectives.
It was found that some of the administrative programmes do not consider the new strategic objective, i.e. construction of a new health-care facility, in their Annual Performance Plans and continue with the implementation of their normal programme objectives; hence, there is insufficient reporting relating to the objective.
At least 40% of the respondents suggested that reporting should be the responsibility of one administrative programme, i.e. Health Facilities Management. The reason for a single point of accountability is that it would help maintain consistency in reporting. This opinion leaves other administrative programmes without responsibility over the prioritised health-care facility project. Of the respondents, 40% mentioned that the administrative programmes that have core responsibilities for certain project parameters are left out without taking part, and that the project is denied an opportunity to engage experts to advise as to whether the project is still focused on attaining the intended goal/s.
Of the respondents, 60% agreed that there is a need for other administrative programmes to be involved in the reporting of progress on the delivery of a health-care facility project, thus enabling all the administrative programmes to monitor their performance against the set objectives. Furthermore, the administrative programmes would be able to determine whether the factors that go into operations management and construction management are being addressed and what progress is being made in each. This becomes an added advantage, as administrative programmes would also be able to determine what lags behind in terms of project activities so that rescue measures can be established, if necessary. Joint reporting informs all the stakeholders about progress made and enables preparation for the operationalisation of the facility once the building project is completed. It was also indicated that the administrative programmes that have core responsibility for the components that build up the delivery of a health-care facility project, such as organisational development and quality assurance, should report on them. Joint reporting by the administrative programmes at strategy development level in the NCDoH would enable continuous development. A respondent indicated that joint reporting "eliminates silo approach, and enables monitoring of commitment at programme management level". Furthermore, reporting by other administrative programmes in the NCDoH "maximises project success", as it puts all administrative programmes on par with project progress and the activities to be performed at each stage.
A cross-functional approach in the NCDoH
A total of 33% of the respondents in the three sectors indicated that there is a cross-functional approach at programme management level. At least 44% of the respondents mentioned that there is a cross-functional approach at the project-commissioning meetings, as these are the ones used by the Health Facilities Management programme to ensure that all the administrative programmes from the Provincial Office of the Department of Health collaborate for the delivery of a health-care facility project. Only 22% of the respondents explained that they have not experienced any cross-functional action at programme management level within the Department. The respondents reported that they have noted that "the department is working in a silo approach" wherein the administrative programmes operate on individual objectives that have no link to each other.
Conclusion
The results of this study show that an institution's inability to consider critical success factors and the existence of functional silos during the construction of a health-care facility have a negative effect on the attainment of the benefits of strategic objectives.
The research revealed that the administrative programmes in the NCDoH perceive construction management and operations management as separate entities during the construction of a health-care facility; hence, the inability to attain a cross-functional approach at programme management level. The administrative programmes do not perceive the construction of a health-care facility as part of the integrated objectives of the NCDoH intended to attain a strategic goal. This enables the development of functional silos within the NCDoH. With the existence of functional silos, the NCDoH would not be able to link an infrastructural programme to provide access to modern and effective health services, as pronounced by the Reconstruction and Development Programme (South Africa, 1994: 10).
The impact of poor programme management coordination leaves the NCDoH with the challenge of how to describe a successful programme, as the complete building infrastructure alone cannot provide all the required health services as initially intended.This would also defeat the aim of programme management, which is the "coordinated management of a portfolio of projects that assist an organisation to achieve benefits that are of strategic importance" (Gardiner, 2005: 11).
Construction management and operations management separately consist of core elements that relate to the delivery objectives of the administrative programmes of the NCDoH. However, it requires senior management commitment at both strategy development and strategy implementation to integrate, synchronise and create common objectives that attain the programme management benefits. The strategy development stage requires that "portfolio management frameworks consider[s] the dynamism that occur[s] through portfolio balancing; dependent upon a rational, mechanistic and linear process to determine the organisation's strategy and priorities, which in turn allows the balancing function to take place" (Linger & Owen, 2012: 106). At the strategy implementation stage, the NDoH proposes that the "provincial strategic plans must include comprehensive hospital plans, which provide a framework in which business cases are subsequently developed" (South Africa, 2005).
Findings from the research show that the NCDoH has not yet established collaborative elements, in particular, critical success factors, that integrate construction management and operations management in the construction of health-care facilities.This affects the attainment of programme benefits, as the benefits management process requires the identification of the CSFs.This happens as a result of sub-optimal programme management coordination on the implementation of the components of the HRP.
Recommendations
The functional managers in the NCDoH should be afforded an opportunity to participate in the development of a programme charter so that they can also be at liberty to avail resources when project/component charters are developed and implemented.
Executive management in the NCDoH should consider the development of a policy framework on stakeholder management that addresses cross-functional interaction between administrative programmes and other sectors.
The executive management should consider that the CSFs are an integral part of the strategic planning of an institution and need to be included.
Executive management should make it compulsory for administrative programmes to develop a benefits management plan that integrates the objectives of the HRP.
Figure 1: Executive management structure in the NCDoH
Figure 2: Components of the HRP
Table 1: Administrative programmes in the NCDoH
Programme 8 is responsible for rendering professional and technical services in respect of buildings and related structures, and to construct new facilities. This includes ensuring compliance with the South African National Standards 10400 for the application of the National Building Regulations, the Infrastructure Unit Systems Support (IUSS) guidelines and the Infrastructure Delivery Management System (IDMS) for the planning, design and construction of health-care facilities by the implementing agent. Provision of programme scope according to the components of the HRP remains the responsibility of all other administrative programmes of the NCDoH as the primary users of the buildings after completion and handover. The outsourcing of the construction of the health-care facilities emanates from insufficient personnel capacity in the NCDoH, taking into account the mandate of this Department.
|
Further Analysis of Outlier Detection with Deep Generative Models
The recent, counter-intuitive discovery that deep generative models (DGMs) can frequently assign a higher likelihood to outliers has implications for both outlier detection applications as well as our overall understanding of generative modeling. In this work, we present a possible explanation for this phenomenon, starting from the observation that a model's typical set and high-density region may not coincide. From this vantage point we propose a novel outlier test, the empirical success of which suggests that the failure of existing likelihood-based outlier tests does not necessarily imply that the corresponding generative model is uncalibrated. We also conduct additional experiments to help disentangle the impact of low-level texture versus high-level semantics in differentiating outliers. In aggregate, these results suggest that modifications to the standard evaluation practices and benchmarks commonly applied in the literature are needed.
Introduction
Outlier detection is an important problem in machine learning and data science. While it is natural to consider applying density estimates from expressive deep generative models (DGMs) to detect outliers, recent work has shown that certain DGMs, such as variational autoencoders (VAEs [1]) or flow-based models [2], often assign similar or higher likelihood to natural images with significantly different semantics than the inliers upon which the models were originally trained [3,4]. For example, a model trained on CIFAR-10 may assign higher likelihood to SVHN images. This observation seemingly points to the infeasibility of directly applying DGMs to outlier detection problems. Moreover, it also casts doubt on the corresponding DGMs: One may justifiably ask whether these models are actually well-calibrated to the true underlying inlier distribution, and whether they capture the high-level semantics of real-world image data as opposed to merely learning low-level image statistics [3]. Building on these concerns, various diagnostics have been deployed to evaluate the calibration of newly proposed DGMs [5][6][7][8][9], or applied when revisiting older modeling practices [10].
As we will review in Section 5, many contemporary attempts have been made to understand this ostensibly paradoxical observation. Of particular interest is the argument from typicality. Samples from a high-dimensional distribution will often fall on a typical set with high probability, but the typical set itself does not necessarily have the highest probability density at any given point. Per this line of reasoning, to determine if a test sample is an outlier, we should check if it falls on the typical set of the inlier distribution rather than merely examining its likelihood under a given DGM. However, previous efforts to utilize similar ideas for outlier detection have not been consistently successful [3,11]. Thus it is unclear whether the failure of the likelihood tests studied in [3] should be attributed to the discrepancy between typical sets and high-density regions or, instead, the miscalibration of the corresponding DGMs. The situation is further complicated by the recent discovery that certain energy-based models (EBMs) do actually assign lower likelihoods to these outliers [5,6], even though we present experiments indicating that the probability density function (pdf) produced by these same models at out-of-distribution (OOD) locations can be inaccurate.
In this work we will attempt to at least partially disambiguate these unresolved findings. To this end, we first present an outlier test generalizing the idea of the typical set test. Our test is based on the observation that applying the typicality notion requires us to construct an independent and identically distributed (IID) sequence out of the inlier data, which may be too difficult given finite samples and imperfect models. For this reason, we turn to constructing sequences satisfying weaker criteria than IID, and utilizing existing tests from the time series literature to check for these properties. Under the evaluation settings in previous efforts applying DGMs to outlier detection, our test is found to work well, suggesting that the previously-observed failures of outlier tests based on the DGM likelihood should not be taken as unequivocal evidence of model miscalibration per se. We further support this claim by demonstrating that even the pdf from a simple multivariate Gaussian model can mimic the failure modes of DGMs.
Beyond these points, our experiments also reveal a non-trivial shortcoming of the existing outlier detection benchmarks. Specifically, we demonstrate that under current setups, inlier and outlier distributions can often be differentiated by a simple test using linear autocorrelation structures applied in the original image space. This implies that contrary to prior belief, these benchmarks do not necessarily evaluate the ability of DGMs to capture semantic information in the data, and thus alternative experimental designs should be considered for this purpose. We present new benchmarks that help to alleviate this problem.
The rest of the paper is organized as follows: In Section 2 we review the typicality argument and present our new outlier dectection test. We then evaluate this test under a range of settings in Section 3. Next, Section 4 examines the difficulty of estimating pdfs at OOD locations. And finally, we review related work in Section 5 and present concluding discussions in Section 6.
2 From Typicality to a White Noise Test
OOD Detection and the Typicality Argument
It is well-known that model likelihood can potentially be inappropriate for outlier detection, especially in high dimensions. For example, suppose the inliers follow the d-dimensional standard Gaussian distribution, $p_{in}(x) \propto \exp(-\|x\|_2^2/2)$, and the test sample is the origin. By concentration inequalities, with overwhelming probability an inlier sample will fall onto an annulus with radius $\sqrt{d}(1+o(1))$, the typical set, and thus the test sample could conceivably be classified as outlier. Yet the (log) pdf of the test sample is higher than that of most inlier samples by $O(d)$. This indicates that the typical set does not necessarily coincide with regions of high density, and that to detect outliers we should consider checking if the input falls into the former set. We refer to such a test as the typicality test.
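To make this contrast concrete, the following minimal NumPy sketch (ours, not from any referenced implementation) reproduces the Gaussian example numerically: the origin attains the highest density, yet a simple radius-based typicality check flags it immediately. The dimensionality, sample count and rejection margin are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 3072                                   # e.g. a 32x32x3 image, flattened
inliers = rng.standard_normal((2000, d))   # samples from the inlier distribution

# Log-density of the standard Gaussian, up to an additive constant.
log_density = lambda x: -0.5 * np.sum(x**2, axis=-1)

origin = np.zeros(d)
print(log_density(origin))                 # ~0: the highest possible value
print(log_density(inliers).mean())         # ~ -d/2: where typical inliers live

# Typicality-style check: inlier norms concentrate tightly around sqrt(d),
# so the origin (norm 0) is rejected despite having the highest density.
radii = np.linalg.norm(inliers, axis=-1)
print(radii.mean(), radii.std())           # ~ sqrt(d), small standard deviation
print(abs(np.linalg.norm(origin) - np.sqrt(d)) > 5 * radii.std())  # True -> outlier
```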
However, the typicality test is not directly applicable to general distributions, since it is difficult to generalize the notion of typical set beyond simple cases such as component-wise independent distributions, while maintaining a similar concentration property. 1 One appealing proposal that generalizes this idea is to fit a deep latent variable model (LVM) on the inlier dataset using a factorized prior, so that we can transform the inlier distribution back to the prior and invoke the typicality test in the latent space. This idea has been explored in [3], where the authors conclude that it is not effective. One possible explanation is that for such a test to work, we must accurately identify the LVM, which may be far more difficult than generating visually plausible samples, requiring a significantly larger sample size and/or better models. Overall, the idea of typicality has not yet been successfully applied to single-sample outlier detection for general inlier distributions. 1 While several papers have referred to the typical set for general distributions (e.g. a natural image distribution) which can be defined using the notion of weak typicality [12], we are only aware of concentration results for log-concave distributions [13], or for stationary ergodic processes [12]. Neither setting describes general distributions encountered in many practical applications.
A White Noise Test for Outlier Detection
As we focus on the high-dimensional case, it is natural to take a longitudinal view of data, and interpret a d-dimensional random variable x as a sequence of d random variables. From this perspective, the aforementioned LVM test essentially transforms x to another sequence T(x), so that when $x \sim p_{in}$, T(x) is IID. Given a new sample $x'$, the test evaluates whether $T(x')$ is still IID by checking the value of $\frac{1}{d}\sum_{t=1}^{d} T_t(x')^2$. The statistical power of the test is supported by concentration properties. Of course IID is a strong property characterizing the lack of any dependency structure in a sequence, and transforming a long sequence back to IID may be an unreasonable objective. Thus it is natural to consider alternative sequence mappings designed to achieve a weaker criterion, and then subsequently test for that criterion. In the time series literature, there are two such weaker possibilities: the martingale difference (MD) and white noise (WN). A sequence x is said to be a MD sequence if $\mathbb{E}(x_t \mid x_{<t}) = 0$ for all t; x is said to be WN if for all $s \neq t$, $\mathrm{Cov}(x_t, x_s) = 0$ and $\mathrm{Var}(x_s) = 1$. It is thus clear that for sequences with zero mean and unit variance, MD is a weaker property than IID, and WN is weaker than MD.
While IID sequences are automatically MD and WN, we can also construct WN or MD sequences from inlier samples using residuals from autoregressive models, per the following. Define $\bar{R}_t(x) := x_t - \mathbb{E}_{p_{in}}(x_t \mid x_{<t})$, $R_t(x) := \bar{R}_t(x)/\sqrt{\mathrm{Var}_{p_{in}}(\bar{R}_t)}$, and $W_t(x) := \sum_{s \le t} a_{ts} x_s$, where the lower triangular matrix $A = (a_{ts})$ is the inverse of the Cholesky factor of $\mathrm{Cov}_{x\sim p_{in}}(x)$. Assume $\mathrm{Var}_{p_{in}}(\bar{R}_t) > 0$ for all t. Then when $x \sim p_{in}$, $\bar{R}(x)$, $R(x)$ are both MD, and $R(x)$, $W(x)$ are both WN.
The first claim above follows from the definition. For the second, R is WN because it is MD and has unit variance. Also, W is WN since $\mathrm{Cov}_{x\sim p_{in}}[W(x)] = I$.
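As an illustration, here is a minimal NumPy sketch of the linear construction of W. It is not the authors' code; the mean-centring and the small ridge added to the covariance are our own additions for numerical convenience, and all names are illustrative.

```python
import numpy as np

def fit_whitener(inlier_x, eps=1e-5):
    """Estimate the lower-triangular map A such that W(x) = A @ (x - mean)
    is (approximately) white noise under the inlier distribution."""
    mean = inlier_x.mean(axis=0)
    cov = np.cov(inlier_x - mean, rowvar=False) + eps * np.eye(inlier_x.shape[1])
    chol = np.linalg.cholesky(cov)      # cov = chol @ chol.T
    A = np.linalg.inv(chol)             # lower triangular, so A @ cov @ A.T = I
    return mean, A

def whiten(x, mean, A):
    # W_t(x) = sum_{s<=t} a_ts (x_s - mean_s); works for a single vector or a batch
    return (x - mean) @ A.T

# Usage sketch: inlier_x is an (N, d) array of flattened inlier images.
# mean, A = fit_whitener(inlier_x)
# w_test  = whiten(x_test, mean, A)     # should look like WN if x_test is an inlier
```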
The conditional expectations in R can be estimated with deep autoregressive models. For convenience we choose to estimate them with existing autoregressive DGMs in the literature (e.g. PixelCNN). However, even though we are fitting generative models, we only need to estimate the mean of the autoregressive distributions $\{p(x_t \mid x_{<t})\}$ accurately, as opposed to estimating the entire probability density function. For this reason, tests using R should be more robust against estimation errors than tests based on model likelihood.
As testing for the MD property is difficult, we choose to test the weaker WN property. This can be implemented using the classical Box-Pierce test statistic [14]
$$Q_{BP}(x) := d \sum_{l=1}^{L} \hat{\rho}_l^2, \qquad (1)$$
where $\hat{\rho}_l$ is the l-lag autocorrelation estimate of a test sequence $(T_t(x))_{t=1}^{d}$. In practice, we can use either W or R as the test sequence, which are both WN when constructed from inliers. When $(T_t)$ has zero mean and unit variance, we have $\hat{\rho}_l = \frac{1}{d-l}\sum_{t=1}^{d-l} T_t T_{t+l}$. We consider a data point $x_{test}$ more likely to be an outlier when $Q_{BP}(x_{test})$ is larger. Under the context of hypothesis testing where a binary decision (whether $x_{test}$ is an outlier) is needed, we can determine the threshold using the distribution of $Q_{BP}$ evaluated on inlier data.
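A minimal NumPy sketch of this statistic, assuming the test sequence has already been standardised (e.g. the whitened sequence W from the sketch above); the lag range, quantile threshold and variable names are illustrative choices rather than the paper's settings.

```python
import numpy as np

def box_pierce(t_seq, lags):
    """Box-Pierce statistic Q_BP = d * sum_l rho_hat_l^2 for a single
    (assumed zero-mean, unit-variance) test sequence of length d."""
    t_seq = np.asarray(t_seq, dtype=np.float64)
    d = t_seq.shape[0]
    q = 0.0
    for l in lags:
        rho_l = np.dot(t_seq[:d - l], t_seq[l:]) / (d - l)   # lag-l autocorrelation
        q += rho_l ** 2
    return d * q

# Usage sketch: larger Q_BP means "more outlier-like".
# q_inlier  = np.array([box_pierce(w, range(1, 51)) for w in whitened_inliers])
# threshold = np.quantile(q_inlier, 0.99)     # e.g. 1% false-positive rate on inliers
# is_outlier = box_pierce(w_test, range(1, 51)) > threshold
```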
In high dimensions, formally characterizing the power of an outlier test can be difficult; as illustrated in Section 2.1, it is difficult to even find a proper definition of outlier that is simultaneously practical. Nonetheless, the following remark provides some intuition on the power of our test, when the test sequence derived from outliers has non-zero autocorrelations. This is a natural assumption for image data, where the residual sequence from outlier data could contain more unexplained semantic information, which subsequently contributes to higher autocorrelation; see Appendix A for empirical verification and further discussion on this matter.

Remark 2.1 (Connection with the concentration-of-measure phenomenon). The power of the Box-Pierce test is supported by a concentration-of-measure phenomenon: when $\{T_t(x)\}$ is IID Gaussian, $Q_{BP}$ will approximately follow a $\chi^2_L$ distribution [14], and $Q_{BP}/L$ will concentrate around 1. On the other hand, if the null hypothesis does not hold and there exists a non-zero $\rho_l$, $Q_{BP}/L$ will be at least $d\rho_l^2/L$, which is much larger than 1 when d is large. It should be noted, however, that our test benefits from the concentration phenomenon in a different way compared to the typicality test. As an example, consider the following outlier distribution: for $x \sim p_{ood}$, $(T_1(x), T_2(x))$ follow the uniform distribution on the circle centered at the origin with radius $\sqrt{2}$, and $T_j(x) = T_{j-2}(x)$ for $j > 2$. Then $\frac{1}{d}\sum_{j=1}^{d} T_j^2(x) = 1$, and thus the typicality test cannot detect such outliers. In contrast, our test will always detect the lag-2 autocorrelation in T, and, as described above, reject the null hypothesis.
Implementation Details
Incorporating prior knowledge for image data: When applied to image data, the power of the proposed test can be improved by incorporating prior knowledge about outlier distributions. Specifically, as the test sequence T(x) is obtained by stacking residuals of natural images, $\rho_l$ is likely small for the lags l that do not align with fixed offsets along the two spatial dimensions. As the corresponding finite-sample estimates $\hat{\rho}_l$ are noisy (approximately normal), they constitute a source of independent noise that has a similar scale in both inlier and outlier data, and removing them from (1) will increase the gap between the distributions of the test statistics computed from inlier and outlier data, consequently improving the power of our test. For this reason, we modify (1) to only include lags that correspond to vertical autocorrelations in images. When the data sequence is obtained by stacking an image with channel-last layout (i.e., $x_{3(W(i-1)+j)+c}$ refers to the c-th channel of the (i, j) pixel of an H × W RGB image), we will only include lags that are multiples of 3W. For empirical verifications and further discussion on this issue, see Appendix A.
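Under the stated channel-last layout, the restricted lag set can be built as in the short sketch below, reusing the box_pierce helper sketched earlier; the image size and the number of vertical shifts are assumptions made for illustration.

```python
# Restrict the Box-Pierce sum to lags aligned with vertical image offsets.
# For a channel-last H x W RGB image flattened to length 3*H*W, a vertical
# shift of k rows corresponds to a lag of k * 3 * W in the stacked sequence.
H, W, C = 32, 32, 3                        # e.g. CIFAR-sized images (assumption)
max_rows = 8                               # number of vertical shifts to test (assumption)
vertical_lags = [k * C * W for k in range(1, max_rows + 1)]
# q = box_pierce(w_test, vertical_lags)    # reuses the earlier sketch
```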
Testing on transformed data: Instead of fitting autoregressive models directly in the input space, we may also fit them on some transformed domain, and use the resulting residual for the WN test. Possible transformations include residuals from VAEs and lower-level latent variables from hierarchical generative models (e.g. VQ-VAE). 4 This can be particularly appealing for the test using (W t ), as linear autoregressive models have limited capacity and cannot effectively remove nonlinear dependencies from data, yet the lack of dependency seems important for the Box-Pierce test, as suggested by Remark 2.1.
Evaluating the White Noise Test
In this section we evaluate the proposed test, with the goal of better understanding the previous findings in [3]. We consider three implementations of our white noise test, which use different sequences to compute the test statistics (1):
• the residual sequence R, estimated with autoregressive DGMs (denoted as AR-DGM);
• the residual sequence W from a linear AR model, directly fitted on the input space (Linear);
• the sequence W constructed from a linear model fitted on the space of VAE residuals (VAE+linear).
Note that both R and W can be viewed as constructed from generative models: for the sequence W , the corresponding model is a simple multivariate normal distribution. Therefore, we can always gain insights from comparing our test to other tests based on the corresponding generative model.
Evaluation on Standard Image Datasets
We first evaluate our white noise test following the setup in [3], where the outlier data comes from standard image datasets, and can be different from inlier data in terms of both low-level details (textures, etc) as well as high-level semantics. In Appendix B we present additional experiments under a similar setup, in which we compare with more baselines.
Evaluation Setup: We use CIFAR-10, CelebA, and TinyImageNet images as inliers, and CIFAR-10, CelebA and SVHN images as outliers. All colored images are resized to 32 × 32 and center cropped when necessary. For deep autoregressive models, we choose PixelSNAIL [15] when the inlier dataset is TinyImageNet, and PixelCNN++ [16] otherwise. We use the pretrained unconditional models from the respective papers when possible; otherwise we train models using the setups from the paper. For the VAE-based tests, we use an architecture similar to [17], and vary the latent dimension $n_z$ as it may have an influence on the likelihood-based outlier test. See Appendix C.1 for more details.
We compare our test (WN) with three baselines that have been suggested for generative-model-based outlier detection: a single-sided likelihood test (LH), a two-sided likelihood test (LH-2S), and, for the DGM-related tests, the likelihood-ratio test proposed in [18] (LR). The LH test classifies samples with lower likelihood as outliers. The LH-2S test classifies samples with model likelihood deviating from the inlier median as outliers. It can be viewed as testing if the input falls into the weakly typical set [12]; while there is no concentration guarantee in the case of general inlier distributions, it is natural to include such a baseline. The LR test is a competitive approach to single-sample OOD detection; it conducts a single-sided test using the statistic $\log \frac{p_{model}(x)}{p_{generic}(x)}$, where $p_{generic}$ refers to the distribution corresponding to some generic image compressor (e.g., PNG). Samples with a lower value of this statistic are considered outliers. The test is based on the assumption that outlier samples with a higher model likelihood may have inherently lower complexity, as measured by $\log p_{generic}$. The test statistic, having the form of a Bayes factor, can also be viewed as comparing two competing hypotheses ($p_{model}$ and $p_{generic}$) without assuming either is true [20]. Furthermore, the fact that we can always construct a principled test statistic out of generative models suggests that these models have in some sense calibrated behavior on such outliers. In other words, under these settings the models do know what they don't know. Our result is to be compared with the recent discovery that EBMs assign lower likelihood to outliers under this setting [5,6], which naturally leads to the question of whether a calibrated DGM should always have a similar behavior. However, our findings are not necessarily inconsistent with theirs, as we explain in Section 4.
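For concreteness, the three baseline scores can be sketched as follows (our paraphrase, with higher scores meaning "more outlier-like"; the bits-to-nats conversion for the generic compressor is our assumption about how code lengths are reported):

```python
import numpy as np

def lh_score(log_px):
    """Single-sided likelihood test: lower model likelihood -> more outlier-like."""
    return -log_px

def lh2s_score(log_px, inlier_log_px):
    """Two-sided test: distance of the likelihood from the inlier median."""
    return np.abs(log_px - np.median(inlier_log_px))

def lr_score(log_px, generic_codelen_bits):
    """Likelihood-ratio style test of [18]: the statistic is
    log p_model(x) - log p_generic(x); lower values are treated as outliers,
    so we negate it to keep the 'higher = more outlier-like' convention.
    log p_generic is approximated by the negative code length (converted to nats)
    of a generic lossless compressor such as PNG."""
    log_p_generic = -generic_codelen_bits * np.log(2.0)
    return -(log_px - log_p_generic)
```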
Comparison between our test and the LR test is more nuanced, as the latter is also competitive in many cases. Still, the LR test consistently produces a slightly higher average rank, and also has two cases of notable failures.
Finally, note that the simple linear generative model, especially when combined with the WN test, works well in most cases. This challenges the intuition that the inflexibility of a linear model would hamper outlier-detection performance, and has two-fold implications. First, these results indicate that the linear white-noise test could be useful in practice, as it is easy to implement, and does not have unexpected failures like the likelihood tests. Hence, it could be applied as a cheap, first test in a detection pipeline. And secondly, the success of the linear test shows that the current benchmarks leave a lot to be desired, since it implies that the differences between the inlier and outlier distributions being exploited for outlier detection are mostly low-level. Consequently, it remains unclear if these benchmarks are adequate for showcasing tests that are sensitive to semantic differences. Such a semantics-oriented evaluation is arguably more important for downstream applications. Moreover, it better reflects the ability of DGMs to learn high-level semantics from data, as was the intent of [3].
To address this issue, in the following subsection we conduct additional experiments that are more focused on semantics.

Semantics-Oriented Evaluation

In this section we evaluate the OOD tests in scenarios where the inlier and outlier distributions have different semantics, but the influence from background or textural differences is minimized. We consider two setups:
• CIFAR, in which we use CIFAR-10 images as inliers and a subset of CIFAR-100 as outliers.
In this setup the inlier and outlier distributions have significantly different semantics, as we have removed from CIFAR-100 all classes that overlap with CIFAR-10, namely, non-insect creatures and vehicles. Furthermore, this setup also reduces textural differences contributed by inconsistent data collection processes; note that both CIFAR datasets have been created from the 80 Million Tiny Images dataset [21].
• Synthetic, in which we further reduce the background and textural differences between image classes by using synthesized images from BigGAN [22]. The outliers are class-conditional samples corresponding to two semantically different ImageNet classes; the inlier distribution is obtained by interpolating between these two classes using the GAN model. In this case, the semantic difference between inlier and outlier distributions is smaller, although in most cases it is still noticeable, as shown in Figure 1. We construct three benchmarks under this setting. Detailed settings and more sample images are postponed to Appendix C.2.
The results are summarized in Table 2, with full results for the synthetic experiments deferred to Appendix C.2. In the CIFAR setup, none of the tests that are based on the AR DGM or the vanilla Gaussian model works well, which is consistent with the common belief that these models cannot capture the high-level semantics. When using VAEs, the WN test works well. This experiment reaffirms that DGMs such as VAEs are able to distinguish between distributions with significantly different semantics, even though they may assign similar likelihood to samples from both distributions.
However, as we move to the synthetic setup where the semantic difference is smaller but still evident, the outcome becomes quite different. The LH test performs much better, and our test no longer consistently outperforms the others. It is also interesting to note that the LR test does not work well on the second synthetic setup (see Appendix C.2), and completely fails to distinguish between inliers and outliers when using an autoregressive DGM. To understand this failure, we plot the distributions of model likelihood and test statistics in Appendix C.2. We can see that the outlier distribution has a slightly higher complexity as measured by the generic image compressor, contrary to the assumption in [18] that the lower input complexity of outliers causes the failure of likelihood-based OOD tests.
The difference in outcome between these experiments and Section 3.1 demonstrates the difficulty in developing a universally effective OOD test. It is thus possible that in the purely unsupervised setting we have investigated, OOD tests are best developed on a problem-dependent basis. Compared with Section 3.1, we can also see that the previous evaluation setups do not adequately evaluate the ability of each test to measure semantic differences. For this purpose, our approach may be more appropriate. 7
On the Difficulty of Density Estimation in OOD Regions
While DGMs such as GANs, VAEs, autoregressive models, and flow-based models tend to assign higher likelihoods to certain OOD images, high-capacity energy-based models have been shown at times to have the opposite behavior [5,6]. This observation naturally leads to the question of whether calibrated generative models trained on natural image datasets should always assign lower likelihood to such outliers. In this section, we argue that such a question is unlikely to have a clear-cut answer, by showing that given the relatively small sample size of typical image datasets compared to the high dimensionality of data, density estimation on OOD regions is intrinsically difficult, and even models such as EBMs can make mistakes.
Specifically, we train a PixelCNN++ and the high-capacity EBM in [5] on samples generated by a VAE. Since by design we have access to (lower bounds of) the true log probability density of the inlier distribution, we can check if a test model's density estimation in OOD regions is correct, simply by comparing it to the ground truth.
Our ground truth VAE has the same architecture as in Section 3, with $n_z = 64$; training is conducted on CIFAR-10. The DGMs to be tested are trained using 80000 samples from the VAE, under the same setup as in the original papers. See Appendix C.3 for details. We generate outliers by setting half of the latent code in the VAE to zero. Such outliers are likely to have a higher density under the ground truth model, per the reasoning from Section 2.1. Therefore, a DGM that correctly estimates the ground-truth data pdf should also assign higher likelihood to them.
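A minimal sketch of this outlier construction, assuming a `decoder` callable that maps latent codes to images; which half of the latent dimensions is zeroed, and the use of the prior mode for those dimensions, are assumptions made for illustration.

```python
import numpy as np

def sample_vae_outliers(decoder, n_samples, n_z=64, rng=None):
    """Draw latent codes from the standard-normal prior, zero out half of the
    dimensions, and decode. Zeroed dimensions sit at the prior mode, so the
    resulting codes have higher density under the ground-truth VAE while
    lying outside its typical set."""
    rng = rng or np.random.default_rng(0)
    z = rng.standard_normal((n_samples, n_z))
    z[:, : n_z // 2] = 0.0                 # which half is zeroed is an assumption
    return decoder(z)                      # `decoder` maps latents to images
```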
The distributions of density estimates are shown in Figure 2. We can see that while both the EBM and PixelCNN++ models being tested assign a higher relative likelihood to the outliers (note that the absolute likelihoods between different models are not comparable because of different scaling and offset factors), the inlier and outlier density estimates from the EBM overlap significantly (middle plot) as compared to analogous overlap within the ground-truth VAE (left plot). Such behavior may be attributed to the inductive bias of the EBM, which has a stronger influence than data on the estimated pdf in OOD regions given the relatively small sample size.
While we conjecture that VAEs or deep AR models can exhibit similar failures due to a different type of inductive bias, we cannot reverse the above experiment and train these models on EBM samples, as sampling from EBMs relies on ad hoc processes such as premature termination of MCMC chains [5,6,24]. Nonetheless, our experiment has demonstrated the intrinsic difficulty of density estimation in OOD regions under the finite-sample, high-dimensional setting. For this reason, it is difficult to draw a definitive conclusion as to whether real-world outliers should be assigned higher likelihoods, and alternative explanations, such as the typicality argument in Section 2, deserve more attention.
The hardness of density estimation in OOD regions also suggests that OOD tests based on DGM likelihood should be used with caution, as is also suggested by the results in Section 3.1.
Related Work
Several works have explored the use of DGMs in outlier detection under settings similar to [3], some of which also provided possible explanations for the findings in [3]. For example, [11] presents a heuristic test using the Watanabe-Akaike Information Criterion; however, the efficacy of this test remains poorly understood. As another alternative, [25] proposes to compute the likelihood ratio between the inlier model and a background model, based on the intuition that background can be a confounding factor in the likelihood test. In Appendix B we present evaluations for the two tests, showing that they do not always work across all settings. In Section 3 we have introduced the work of [18], and demonstrated that its assumption does not always hold. In summary then, to date there has not been a comprehensive explanation of the peculiar behavior of generative models on semantically different outliers, although previous works can be illuminating and practically useful in certain scenarios.
For the general problem of high-dimensional outlier detection, methods have also been developed under different settings. For example, [19] proposes a typicality test assuming the input contains a batch of IID samples, while [4] assumes a few outlier samples are available before testing. There is also work on outlier detection in supervised learning tasks, where auxiliary label information is available; see, e.g., [26-32].
Finally, it is worth mentioning the formulation of atypicality [33], as motivated by the possible mismatch between the typical set and the high-density regions. The atypicality test considers a test sequence to be OOD when there exists an alternative model leading to a smaller description length [34]. However, their choice to estimate $p(x_t \mid x_{<t})$ for test data x becomes problematic when x cannot be viewed as a stationary process, or with a large hypothesis space such as with DGMs.
Discussion
The recent discovery that DGMs may assign higher likelihood to natural image outliers casts into doubt the calibration of such models. In this work, we present a possible explanation based on an OOD test that generalizes the notion of typicality. In evaluations we have found that our test is effective under the previously used benchmarks, and that such peculiar behaviors of model likelihood are not restricted to DGMs. We have also demonstrated that certain DGMs cannot accurately estimate pdfs at OOD locations, even if at times they may correctly differentiate outliers. These findings suggest that it may be premature to judge the merits of a model by its (in)ability to assign lower likelihood to outliers.
Further investigation of the behavior of DGMs on outliers will undoubtedly continue to provide useful insights. However, our analyses suggest a change of practice in such investigations, such as considering alternatives to simply the model likelihood as our proposed test has exemplified. Likewise, the observation that a simple linear test performs well under current evaluation settings also suggests that care should be taken in the design and diversity of benchmark datasets, e.g., inclusion of at least some cases where low-level textures cannot be exclusively relied on.
And finally, from the perspective of unsupervised outlier detection, our experiments also revealed the intrinsic difficulty in designing universally effective tests. It is thus possible that future OOD tests are best developed on a problem-dependent basis, with prior knowledge of potential outlier distributions taken into account. [25] provides an example of such practice.
Broader Impact
This paper explores the nuances of applying DGMs to outlier detection, with the goal of understanding the limitations of current approaches as well as practical workarounds. From the perspective of fundamental research into existing machine learning and data mining techniques, we believe that this contribution realistically has little potential downside. Additionally, given the pernicious role that outliers play in numerous application domains, e.g., fraud, computer intrusion, etc., better preventative measures can certainly play a positive role. That being said, it is of course always possible to envision scenarios whereby an outlier detection system could inadvertently introduce bias that unfairly penalizes a marginalized group, e.g., in processing loan applications. Even so, it is our hope that the analysis herein could more plausibly be applied to exposing and mitigating such algorithmic biases.
We first plot samples of the residual sequence R in Figure 3, under varying choices of inlier and outlier distributions. We can see that R constructed from outlier images generally includes a higher proportion of unexplained semantic information: comparing the CelebA residual in Fig. 3(a) (second column), where the model is trained on CIFAR-10, to Fig. 3(b) (first column), where CelebA is the inlier, we can see that the facial structure in the CelebA residual is more evident when the model is trained on CIFAR-10. Similarly, comparing the CIFAR-10 residual from both models, we can see that the structure of the vehicle (e.g. front window and car frame) is more evident when the model is trained on CelebA. As the residual sequences constructed from outliers tend to have more natural image-like structures, they will also have stronger spatial autocorrelations, compared with residuals from inlier samples that should in principle be white noise.
Note that while the residual sequences constructed from inliers also contain unexplained semantic information, this is due to the estimation error of the deep AR model, and should not happen should we have access to the ground truth model, as we have shown in Section 2.2. Moreover, the estimation error should have a small impact on the efficacy of the white noise test, as it is very easy to learn the correct linear autocorrelation structure of the inlier distribution, and thus the deviation of R from WN is usually small, as we show in Figure 4 (right). We now turn to the verification of our prior belief about the autocorrelation structure in $T(x_{test})$, when $x_{test}$ comes from the outlier distribution. Specifically, we plot the average ACFs on inlier and outlier data in Figure 4. We can see that the ACF estimates on outlier residuals peak at lags that are multiples of 96, which corresponds to the vertical spatial autocorrelations in 32 × 32 × 3 images. Moreover, on inlier and outlier distributions, the ACF estimates at other lags have approximately equal variances. When aggregated, these estimates will constitute a noticeable source of noise which reduces the gap between the distributions of inlier and outlier test statistics, and thus excluding them from the statistics will improve the power of the WN test.
Finally, we remark that it is also possible to use spatial correlations directly in the construction of test statistics. However, our main focus in this work is to understand previous findings in generative outlier detection (instead of improving the state-of-the-art of OOD tests), and our choice to include only the vertical spatial autocorrelations is good enough for this purpose.
B More Experiments on Standard Image Datasets
In this section we conduct additional experiments, and evaluate a variety of generative outlier detection methods under a common setting. As we will see, while several tests are in general more competitive than others, no single test achieves the best performance across all settings. This experiment strengthens our argument in the main text that unsupervised OOD tests should be developed on a problem-dependent basis.
Evaluation Setup: We use CIFAR-10 as inlier data. For outliers we consider two setups. The first setup is taken from [18], and consists of 9 generic image datasets and 2 synthetic datasets, const and random; see Appendix A in [18] for details. The second setup controls for low-level differences by using the CIFAR-100 subset constructed in Section 3.2. The tests to be evaluated include those considered in Section 3.1, as well as the WAIC test [11] and the background likelihood ratio (BLR) test [25]. We base these tests on two DGMs: the VAE-512 model used in Section 3.1, and a smaller-capacity PixelCNN++ model as in [25]. For the BLR test, a noise level of the background model needs to be determined. Following the recommendations of the authors, we search for the optimal parameter in the range of {0.1, 0.2, 0.3} using the grayscaled CIFAR-10 dataset as outlier. We found the optimal noise level to be 0.1, which is consistent with [25].
Results: Results are shown in Tables 3-4. When using VAEs, neither of the newly added baselines is very competitive, suggesting that these methods are more prone to model misspecification.
Notably, the WAIC test does not work with SVHN as outlier. This is also observed in [25,19] using different generative models (autoregressive and flow-based models, respectively). For this reason we drop it in the PixelCNN++ experiment.
When we switch to PixelCNN++, the BLR test performs much better under the setting of [18]. However, in either case it does not work well with the subset-of-CIFAR-100 dataset, despite the dataset's clear semantic difference from the inlier dataset. Such results are not surprising since the difference in background or low-level details is much smaller for CIFAR-100 compared with the other datasets, as we have discussed in Section 3.2. Again, the difference in outcome between the two different settings demonstrates the difficulty of constructing universally effective OOD tests in the unsupervised setup.
C Experiment Details and Additional Results
C.1 Details for Section 3.1
Experiment Setup: For the AR-DGM experiments, we use the pretrained unconditional models from official repositories for CIFAR-10 and TinyImageNet. For CelebA we train a PixelCNN++ model using the authors' setup for unconditional CIFAR-10 generation. For the VAE experiments, we use the discretized logistic likelihood as the observation model, as in PixelCNN++. The network architecture is adapted from [17]; we vary the capacity of the model by increasing the number of filters in convolutional layers by k times, where k may be in {1, 2, 4, 8}. We train for at most 8 × 10⁵ iterations using a learning rate of 10⁻⁴, and perform early stopping based on the validation ELBO. We choose k to maximize validation ELBO. This leads to k = 1 for CIFAR-10, 4 for CelebA and 8 for TinyImageNet. This step is needed because, when k is further increased, the reconstruction error starts to have different distributions between the training and held-out sets. Such a difference would be undesirable for all tests, as they would start to find false differences between the inlier training set and the test set. Note that this difference is not due to overfitting, as we have performed early stopping based on validation ELBO; instead, it is simply due to the fact that the model is exposed to training samples and not validation samples, and the gap appears very early in training. We use the ELBO to approximate model likelihood in likelihood-related tests. The discrepancy between the ELBO and the true model likelihood is likely to have little impact on test performance, since we also experimented with IWAE₁₀₀, which led to very similar results.
We compare the distributions of the test statistics evaluated on the inlier test set and outlier test set, and report the AUROC value. We verified that the four tests used in this section do not falsely distinguish between inlier training samples and test samples: the AUROC value for such a comparison is always in the range of (0.42, 0.53). For outlier datasets with more than 50000 test samples, we sub-sample 50000 images for evaluation. Using the formula in [35], we can thus show that the maximum possible 95% confidence interval for the AUROC values is ±0.011. For a description of the four datasets used in this section, please refer to, e.g., Table 3 in [18].
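As a concrete illustration of how the comparison above could be carried out, here is a minimal sketch computing an AUROC between inlier and outlier test statistics; the statistic arrays are placeholders, and the use of scikit-learn's roc_auc_score is our choice rather than necessarily the paper's.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Placeholder arrays; in practice these would hold the chosen test statistic
# evaluated on the inlier test set and on the outlier test set, respectively.
rng = np.random.default_rng(0)
stats_inlier = rng.normal(loc=0.0, scale=1.0, size=10000)
stats_outlier = rng.normal(loc=1.0, scale=1.0, size=10000)

# Outliers are treated as the positive class; the statistic is the score
# (here, larger values are assumed to indicate more outlier-like inputs).
y_true = np.concatenate([np.zeros_like(stats_inlier), np.ones_like(stats_outlier)])
y_score = np.concatenate([stats_inlier, stats_outlier])
print("AUROC:", roc_auc_score(y_true, y_score))
```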
Choice of L and Sensitivity: For our test, we use L = 1200 when computing the Box-Pierce statistics (1). This is because, while in principle we should include all lags that are known a priori to be informative, in practice we only have d − l samples to estimate ρ̂_l, so the most distant lags can be difficult to estimate. Nonetheless, the impact of L on the test outcome is relatively small: as is shown in Figure 5, using different L does not lead to qualitatively different outcomes. We also note that our purpose in the experiments is not to build a new state-of-the-art in OOD detection, but to use the proposed test to validate our explanation of previous findings. Still, if it is desirable to further improve the performance of the test, we can consider tuning L on "validation outlier datasets" that are known a priori to be similar to the outliers that will be encountered in practice, as is done in e.g. [25].

Results for the Normal Likelihood Test on VAE Residuals: In Table 5 we present results for the likelihood tests using a multivariate normal model fitted on VAE residuals, denoted with a prefix of "LN". We also consider both single-side and two-side tests. Overall the performance is similar to DGM likelihood, and the single-side likelihood test still manifests catastrophic failures.

C.2 Details for Section 3.2
The CIFAR Experiment: We use the trained models from Section 3.1. We remove from CIFAR-100 the superclasses 1, 2, 9, 12-17, 19, and 20. For reference, the class names of CIFAR-10 and CIFAR-100 can be found in https://www.cs.toronto.edu/~kriz/cifar.html.
The Synthetic Experiments: We use a pretrained BigGAN model on ImageNet 128 × 128, and down-sample the generated images to 32 × 32. To generate the outliers, recall the BigGAN generator takes as input a noise vector z ∈ ℝ¹²⁸ and the one-hot class encoding vector c ∈ ℝ¹⁰⁰⁰. Therefore, we interpolate between two classes i and j by setting c_k = 0.5 · 1[k ∈ {i, j}]. There are two tunable parameters in our generation process: the truncation parameter σ that determines the truncated normal prior, and a crop parameter τ. Before down-sampling the generated samples, we apply center-cropping to retain a proportion of (1 − 2τ)² pixels, to reduce the amount of detail lost in the down-sampling process. The classes and generation parameters used are listed in Table 6; they are hand-picked to ensure the background is similar in inlier and outlier classes. In each setup we generate 200000 samples and use 80% for training.
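The class-interpolation step can be sketched as follows; the `generator` and `downsample` helpers are hypothetical stand-ins for whatever BigGAN implementation and resizing routine are used, and the exact truncated-normal sampling is our assumption rather than the paper's.

```python
import numpy as np
from scipy.stats import truncnorm

def make_class_vector(i, j, n_classes=1000):
    """One-hot mixture c_k = 0.5 * 1[k in {i, j}] interpolating classes i and j."""
    c = np.zeros(n_classes)
    c[[i, j]] = 0.5
    return c

def sample_z(sigma, dim=128):
    """Latent vector from a normal prior truncated to [-sigma, sigma]."""
    return truncnorm.rvs(-sigma, sigma, size=dim)

def center_crop(img, tau):
    """Retain a (1 - 2*tau)^2 fraction of pixels before down-sampling."""
    h, w = img.shape[:2]
    dh, dw = int(tau * h), int(tau * w)
    return img[dh:h - dh, dw:w - dw]

# Hypothetical usage with some pretrained 128x128 BigGAN wrapper `generator`
# and an image-resizing helper `downsample` (both stand-ins, not real APIs):
# img = generator(sample_z(sigma=0.4), make_class_vector(11, 12))  # 128x128x3
# img_32 = downsample(center_crop(img, tau=0.1), size=(32, 32))
```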
The VAEs are trained using the same setting as in Section 3.1. For PixelCNN++ we use the hyperparameters of the unconditional CIFAR-10 experiment in the original paper. As the synthetic datasets contain more samples, we train for 80 epochs.
The full AUROC values for the synthetic experiments are shown in Table 7. We plot the distributions of various statistics related to the LR tests using AR-DGM in the second synthetic experiment in Figure 6. We also plot additional inlier and outlier samples in Figure 7.
Figure 6: Distribution of various statistics related to the LR test using AR-DGM on the second synthetic experiment.
C.3 Details for Section 4
The ground truth VAE has the same architecture as in Section 3.1, but with a continuous normal likelihood. We use n_z = 64. The VAE (log) likelihood is lower-bounded by IWAE₂₀₀. For EBM and PixelCNN++, we use the authors' hyperparameters and training setup for the unconditional CIFAR-10 experiments. After training, we verified that the distributions of energy values of training and held-out samples have small differences, so the models do not appear to overfit.
As the OOD test results in [5,6] are obtained with conditional models, we perform the single-sided likelihood test with the unconditional model (trained on the real CIFAR-10 dataset) to check if its behavior on the SVHN dataset is similar to the conditional model. The AUROC value from the single-side likelihood test is 0.529, meaning that the EBM assigns similar or lower likelihood to
The Risk of Hearing Impairment From Ambient Air Pollution and the Moderating Effect of a Healthy Diet: Findings From the United Kingdom Biobank
The link between hearing impairment and air pollution has not been established, and the moderating effect of a healthy diet has never been investigated before. The purpose of this study was to investigate the association between air pollution and hearing impairment in British adults aged 37–73 years, and whether the association was modified by a healthy diet. We performed a cross-sectional population-based study with 158,811 participants who provided data from United Kingdom Biobank. A multivariate logistic regression model was used to investigate the link between air pollution and hearing impairment. Subgroup and effect modification analyses were carried out according to healthy diet scores, gender, and age. In the fully adjusted model, we found that exposure to PM10, NOX, and NO2 was associated with hearing impairment [PM10: odds ratio (OR) = 1.15, 95% confidence interval (95% CI) 1.02–1.30, P = 0.023; NOX: OR = 1.02, 95% CI 1.00–1.03, P = 0.040; NO2: OR = 1.03, 95% CI 1.01–1.06, P = 0.044], while PM2.5 and PM2.5 absorbance did not show similar associations. We discovered an interactive effect of age and air pollution on hearing impairment, but a healthy diet did not. The findings suggested that exposure to PM10, NOX and NO2 was linked to hearing impairment in British adults, whereas PM2.5 and PM2.5 absorbance did not show similar associations. These may help researchers focus more on the impact of air pollution on hearing impairment and provide a basis for developing effective prevention strategies.
INTRODUCTION
Hearing impairment is one of the most common age-related chronic health problems (Vos et al., 2016). The rate of clinically significant hearing impairment is doubling approximately every decade (Lin et al., 2011;Goman and Lin, 2016). Hearing impairment has been reported to be the second most prevalent disorder and the dominant cause of years lived with disability among global noninfectious diseases (Vos et al., 2016). In contrast with normal hearing adults of the same age, those with hearing impairment have a greater incidence of hospitalization (Genther et al., 2013), death (Contrera et al., 2015), falls (Lin and Ferrucci, 2012), cardiovascular disorders (McKee et al., 2018), depression (Li et al., 2014), and dementia . Consequently, hearing impairment causes a huge burden on the emotional and physical wellbeing of individuals (Dawes et al., 2014b). It is predicted that one-fifth of the population of the United Kingdom will suffer from hearing impairment by 2035 (Taylor et al., 2020). Accordingly, the key is to prevent hearing impairment. Hearing impairment is caused by a combination of hereditary and environmental factors (Cunningham and Tucci, 2017). The identification of modifiable risk factors is critical to provide the basis for preventive strategies.
Global trends in urbanization and industrialization have led to a growing problem of air pollution (Landrigan, 2017), which has become the main public health issue across the world (Brunekreef and Holgate, 2002). Of note, growing evidence demonstrates that air pollution exposure is not only connected with respiratory disorders, such as lung cancer (Xing et al., 2019), but also with cardiovascular diseases (Lelieveld et al., 2019;Hayes et al., 2020), inflammatory diseases (Chang et al., 2016), diabetes (Strak et al., 2017), and neurodegenerative diseases (Chen et al., 2017). Besides, the main environmental risk factor for human death is air pollution (Gordon et al., 2014). Lately, there have been reports that air pollution may impact hearing health, but available data is limited. A recent study found that participants exposed to fine particulate matter (PM 2.5 : particulate matter ≤ 2.5 µm in diameter) and nitrogen dioxide (NO 2 ) had a substantially increased risk of sudden sensorineural hearing loss (SSNHL). Another study (Chang et al., 2020) showed that increased concentrations of NO 2 were linked to a higher risk of sensorineural hearing loss, while in a nested case-control study (Choi et al., 2019), SSNHL was associated with NO 2 exposure, but particulate matter with a diameter of 10 µm or less (PM 10 ) was not associated with SSNHL. Similarly, another study (Lee et al., 2019) also found no association between PM 10 and number of SSNHL patient. Although these studies explored the association of air pollution with sensorineural hearing loss, the results remained controversial.
A healthy diet might preserve hearing (Spankovich and Le Prell, 2013;Curhan et al., 2018Curhan et al., , 2020, as described by their role in preventing chronic illnesses (Yevenes-Briones et al., 2021). A healthy diet includes multiple components that support antioxidant function and protect against free radical damage (Curhan et al., 2020), thereby regulating oxidative stress and delaying mitochondrial dysfunction (Yevenes-Briones et al., 2021). In addition, a healthy diet might be beneficial to hearing impairment by protecting microvascular and macrovascular damage to cochlear blood flow (Appel et al., 2006;Fung et al., 2008), providing the essential nutrients for an adequate cochlear blood supply (Yevenes-Briones et al., 2021), and reducing inflammation (Neale et al., 2016). According to previous research, dietary patterns could modify the relationship between air pollution and health-related outcomes, such as cardiovascular disease mortality risk (Lim et al., 2019) and cognitive function (Zhu et al., 2022). However, the moderating effect of a healthy diet on the link between hearing impairment and air pollution has not been investigated before. Therefore, in this cross-sectional study, we aimed to explore the link between air pollution and hearing impairment and to analyze whether a healthy diet has moderating effects on this link.
Study Subjects
The United Kingdom Biobank is an international and accessible data resource 1 containing data on more than half a million people aged from 37 to 73 years (99.5% were between 40 and 69 years) in England, Scotland, and Wales (Collins, 2012). Adults living within a 25-mile radius of one of 22 Biobank Assessment Centers in the United Kingdom were invited by email to join the United Kingdom Biobank between 2006 and 2010, achieving a response rate of approximately 5.5% (Sudlow et al., 2015). Participants completed a computer touch screen questionnaire (which included questions on topics such as population, health, lifestyle, environment as well as medical history, etc.) and underwent physical measurements, including a hearing test. Written informed consent was signed by all the participants. The research was carried out with the general approval of the National Health Service and the National Research Ethics Service.
The subjects of the current study were all those participants for whom data on both air pollution measures and hearing test results were available.
Hearing Test
The speech-in-noise hearing test (i.e., digit triplet test, DTT) of the United Kingdom Biobank provided participants with 15 groups of English monosyllabic numbers to evaluate the listening thresholds (i.e., signal-to-noise ratio) at different sound levels. 2 Each ear was examined separately, in an order to which the participants were allocated at random. Participants first wore circumaural headphones and selected the most comfortable volume. Then, they started the speech-in-noise hearing test to identify and type the three numbers they had heard by touching the screen interface. The noise level of the subsequent triplet would increase if the triplet was correctly recognized; otherwise, it would decrease. The speech reception threshold (SRT) was defined as the signal-to-noise ratio at which half of the presented speech was correctly understood. The SRT ranged from −12 to +8 dB, with a lower score representing better performance. Based on the cutoff point established by Dawes et al. (2014b), the better-performing ear was chosen for this study, and participants were divided into normal (SRT < −5.5 dB) and hearing impairment (SRT ≥ −5.5 dB) groups.
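For illustration only, a toy adaptive staircase of the kind sketched above could look like the following; the step size, starting SNR, and threshold estimate are assumptions and do not reproduce the actual United Kingdom Biobank DTT algorithm.

```python
def run_staircase(respond, n_trials=15, snr_start=0.0, step=2.0,
                  snr_min=-12.0, snr_max=8.0):
    """Toy 1-up/1-down staircase over signal-to-noise ratio (SNR, in dB)."""
    snr, history = snr_start, []
    for _ in range(n_trials):
        correct = respond(snr)  # True if all three digits were identified
        history.append(snr)
        # More noise (lower SNR) after a correct answer, less after an error.
        snr = snr - step if correct else snr + step
        snr = max(snr_min, min(snr_max, snr))
    # Crude speech-reception-threshold estimate: mean SNR of the later trials.
    tail = history[len(history) // 2:]
    return sum(tail) / len(tail)

# Example with a simulated listener whose true threshold is -7 dB:
print(run_staircase(lambda snr: snr > -7.0))
```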
The DTT shows a very good correlation with the pure tone hearing test (r = 0.77) (Jansen et al., 2010), so it can be considered as a measure of hearing impairment (Dawes et al., 2014b). There are some advantages to the DTT, for example, there is no need for a sound booth and the test can be delivered via the internet (Moore et al., 2014). The most common hearing complaint is difficulty in hearing over background noise (Pienkowski, 2017), so the speech-in-noise hearing test used to evaluate hearing function represents an ecologically effective as well as objective hearing indicator (Couth et al., 2019).
Measures of Air Pollution
The air pollution data recorded in the United Kingdom Biobank were from the Small Area Health Statistics Unit, 3 a part of the BioShaRE-EU Environmental Determinants of Health Project. 4 The Land Use Regression model was applied to assess air pollution in 2010 by modeling at each residential address of the participants, which was developed as part of the European Study of Cohorts for Air Pollution Effects. 5 The Land Use Regression model used to calculate the spatial distribution of air pollutants was based on geographic predictors such as traffic, land use, and topography in the geographical information system. In this study, the air pollutants assessed were PM 2.5 , PM 10 , PM 2.5 absorbance, NO X , and NO 2 , of which all were annual average concentrations in µg/m 3 . More details about the air pollution data used in the United Kingdom Biobank are available elsewhere. 6
Assessment of Other Variables
Age, gender, ethnicity, educational background, employment, smoking status, and alcohol intake were utilized as baseline data. The ethnic background of participants was divided into six categories: White, Black, Asian, Chinese, Mixed, and other. The educational background was divided into six categories: higher national diploma (HND), national vocational qualification (NVQ), higher national certificate (HNC), or equivalent; A levels or AS levels (including the higher school certificate), or equivalent; O levels (including the school certificate), general certificate of secondary educations (GCSEs), or equivalent; certificate of secondary educations (CSEs), or equivalent; college or university degree; and other professional qualification. Employment status was divided into seven categories: retired; unable to work because of sickness or disability; looking after home and/or family; unemployed; in paid employment or selfemployed; student (full-time or part-time); or doing unpaid or voluntary work. Smoking status (Dawes et al., 2014a) was divided into three categories: never-smokers, current and former smokers. Alcohol consumption frequency was divided into five categories: daily or almost daily; three or four times a week; once or twice a week; occasional drinking; and never. Body mass index (BMI) was categorized as obese (BMI ≥ 30), overweight (25 ≤ BMI < 30), normal weight (18.5 ≤ BMI < 25), and underweight (BMI < 18.5). Evaluation of physical activity was conducted through the questions in the International Physical Activity Questionnaire, which graded activity into three degrees: low, moderate, and high. 7 A questionnaire 8 containing the usual dietary intake was completed by United Kingdom Biobank participants during the baseline assessment. The intake of fruits (fresh fruit intake and dried fruit intake), vegetables (cooked vegetable intake and salad/raw vegetable intake), fish (oily fish intake and non-oily fish intake), processed meat and unprocessed red meat (beef intake, lamb/mutton intake, and pork intake) from the United Kingdom Biobank food intake questionnaire was used to calculate the health diet scores (Wang et al., 2021): fruit intake ≥ three pieces per day, vegetable intake ≥ four tablespoons per day, fish intake ≥ twice per week, processed meat intake ≤ twice per week, unprocessed red meat intake ≤ twice per week. Each favorable dietary factor gave a point, so the healthy diet scores were 0-5. The serum concentrations of glycosylated hemoglobin and total cholesterol were regarded as continuous variables. Vascular problems included angina, heart attack, stroke, and high blood pressure.
Data Analysis
All analyses were performed using R version 4.0.2. The data are summarized descriptively. Continuous variables are represented as mean (standard deviation), and comparisons between the two groups were performed by independent-sample t test. Categorical variables are represented as percentages (%), and rates were compared by χ² test. The link between air pollution and hearing impairment was investigated using a multivariate logistic regression model with and without adjusting for other variables. Model 1 was unadjusted, Model 2 was adjusted for age and gender, and Model 3 was further adjusted for race, educational level, employment, smoking status and alcohol consumption frequency, BMI, physical activity, glycosylated hemoglobin, total cholesterol, and vascular diseases (heart attack, stroke, angina, and hypertension). Moreover, we evaluated the association within subgroups stratified by healthy diet scores (low: 0-2, and high: 3-5), gender (female and male) and age (≤50, 51-60, and >60). The Wald test was used to test interactions among subgroups. P < 0.05 (two-sided test) was considered statistically significant.
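To make the modelling step concrete, below is a minimal sketch of how a Model-3-style specification could be fitted; it uses Python's statsmodels purely for illustration (the analyses above were run in R 4.0.2), and the file name and column names are assumptions about how the variables might be coded.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical analysis file: one row per participant, hearing_impairment is
# 0/1 (SRT >= -5.5 dB), pollutant columns hold annual mean concentrations.
df = pd.read_csv("ukb_hearing_analysis.csv")

model3 = smf.logit(
    "hearing_impairment ~ pm10 + age + C(sex) + C(ethnicity) + C(education)"
    " + C(employment) + C(smoking) + C(alcohol_freq) + C(bmi_cat)"
    " + C(physical_activity) + hba1c + total_cholesterol + C(vascular_disease)",
    data=df,
).fit()

# Odds ratio and 95% CI for PM10 on the exponentiated scale.
or_pm10 = np.exp(model3.params["pm10"])
ci_low, ci_high = np.exp(model3.conf_int().loc["pm10"])
print(f"PM10: OR = {or_pm10:.2f}, 95% CI {ci_low:.2f}-{ci_high:.2f}")
```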
RESULTS
In total, 158,811 subjects were enrolled in this study, including 18,881 (11.9%) with hearing impairment and 139,930 (88.1%) with normal hearing; 54.5% were female (n = 86,516), 91.7% were white (n = 145,633), and the mean (standard deviation) age was 56.68 (8.15) years. The distribution of baseline characteristics and air pollution in the two groups is shown in Table 1. Except for physical activity, the other variables were significantly differently distributed between the two groups (P < 0.05). In comparison to the group with normal hearing, subjects in the hearing impairment group were, on average, older and more likely to be non-white. In addition, they were more likely to be obese and to have cardiovascular problems. Furthermore, the hearing impairment group was exposed to higher mean annual concentrations of air pollutants than the normal hearing group (Table 1). Table 2 shows the risks of several air pollutants for hearing impairment. Model 1 (without adjustment for any confounders) showed significant associations between air pollutants and hearing impairment (P < 0.001) [PM 2.5 : odds ratio (OR) = 2.03, 95% confidence interval (95% CI) 1.73-2.40; PM 10 : OR = 1.64, 95% CI 1.51-1.78; PM 2.5 absorbance: OR = 1.48, 95% CI 1.40-1.56; NO X : OR = 1.06, 95% CI 1.05-1.07; NO 2 : OR = 1.17]. Except for PM 2.5 and PM 2.5 absorbance, which showed no significant associations with hearing impairment (P = 0.970 and P = 0.063, respectively), we observed that the associations between the other pollutants and hearing impairment remained in Model 3 after further adjusting for other confounders on the basis of Model 2 (PM 10 : OR = 1.15, 95% CI 1.02-1.30, P = 0.023; NO X : OR = 1.02, 95% CI 1.00-1.03, P = 0.040; NO 2 : OR = 1.03, 95% CI 1.01-1.06, P = 0.044), even though the estimates were lower than those in Models 1 and 2. Table 3 shows the associations between several air pollutants and hearing impairment, stratified by healthy diet scores. In this stratified analysis, no significant associations or moderating effects were observed. After stratification by age (Table 4), we found that PM 10 , PM 2.5 absorbance, NO X , and NO 2 were associated with hearing impairment in participants up to and including 50 years of age (PM 10 : OR = 1.62, 95% CI 1.20-2.18, P = 0.002; PM 2.5 absorbance: OR = 1.32, 95% CI 1.08-1.61, P = 0.006; NO X : OR = 1.04, 95% CI 1.01-1.08, P = 0.014; NO 2 : OR = 1.09, 95% CI 1.01-1.17, P = 0.031). In participants aged 51 to 60 years and above 60, there was no connection between air pollution and hearing impairment. Additionally, there was a statistically significant interaction between age and air pollution with hearing impairment (P < 0.05). Further, after stratifying by gender (Table 5), we found that NO X and NO 2 were correlated with hearing impairment in men.
DISCUSSION
In this cross-sectional study, we investigated the association between hearing impairment and air pollution (comprising PM 2.5 , PM 10 , PM 2.5 absorbance, NO X , and NO 2 ) using United Kingdom Biobank data. We found that exposure to PM 10 , NO X , and NO 2 was linked to hearing impairment after adjusting for confounding factors, while PM 2.5 and PM 2.5 absorbance showed no similar correlations. Furthermore, there was no modification of these associations by a healthy diet. Regarding age, interaction effects were observed.
The relationship between air pollution and hearing impairment has not been fully established yet. Several studies indicated that exposure to NO 2 could be related to hearing problems. Chang et al. (2020) found that people exposed to moderate (hazard ratio, HR = 1.40, 95% CI 1.27-1.54) and high levels of NO 2 (HR = 1.63, 95% CI 1.48-1.81) were at higher risk of developing sensorineural hearing loss than those exposed to the low level. The results of Tsai et al. (2020) were similar, finding a significantly increased risk of SSNHL in those exposed to high concentrations of NO 2 (adjusted HR = 1.02, 95% CI 1.01-1.04). Likewise, Choi et al. (2019) discovered that SSNHL was associated with short-term exposure to NO 2 (14 days) (adjusted OR = 3.12, 95% CI 2.16-4.49). (Notes to Tables 3-5. Abbreviations: OR, odds ratio; CI, confidence interval; PM, particulate matter; NO 2 , nitrogen dioxide; NO X , nitrogen oxides. Models in Table 3 were adjusted for age, gender, race, education, employment, smoking, drink frequency, body mass index, physical activity, glycosylated hemoglobin, total cholesterol, and vascular disease (heart attack, stroke, angina, and hypertension); this subgroup included 152,738 participants because of missing dietary information for 6,073 participants. Models in the age-stratified analysis (Table 4) were adjusted for the same covariates except age, and models in the gender-stratified analysis (Table 5) were adjusted for the same covariates except gender.) Consistent with previous studies, NO 2 was associated with hearing impairment
in our study. Moreover, NO X , a term that contains several nitrogen compounds but is mainly composed of nitrogen oxide and NO 2 , showed an association with hearing impairment. In contrast to our expectations, we found a significant association between PM 10 and hearing impairment but not PM 2.5 . Conversely, previous studies (Choi et al., 2019;Lee et al., 2019) showed no correlation between PM 10 and hearing impairment. A study reported a significantly higher risk of developing SSNHL with moderate (adjusted HR = 1.58, 95% CI 1.21-2.06) or high (adjusted HR = 1.32, 95% CI 1.00-1.74) level exposure to PM 2.5 compared to those exposed to the low level. And another study discovered a slight negative association between the maximum PM 2.5 concentration and the admission rate of SSNHL (Lee et al., 2019). In 2017, a study (Strak et al., 2017) in a large national health survey reported that oxidative potential of PM 2.5 rather than PM 2.5 , was associated with diabetes prevalence, indicating that the impact of particulate matter on diabetes might vary with the compositions. According to a study (Yin and Harrison, 2008) conducted at three sites (urban roadside, central urban background, and rural) in Birmingham, United Kingdom, organics, nitrate, and sulfate accounted for a substantial amount of the overall mass for both PM 10 and PM 2.5 . This research also showed that proportions of these three major parts and other secondary compositions like iron-rich dust and sodium chloride varied in both. Although discrepancies in associations with diseases after PM 2.5 and PM 10 exposure could be explained by different compositions of particulate matter, the evidence may still be limited. More research is required to clarify this issue in the future.
Oxidative stress and mitochondrial dysfunction play a crucial role in hearing impairment (Yamasoba et al., 2013). Air pollution might be involved in oxidative stress by producing or directly acting as reactive oxygen species (Kelly, 2003), which can then induce mitochondrial damage (Rodríguez-Martínez et al., 2013). Dysfunctional mitochondria increase reactive oxygen species generation and accumulation, reducing the mitochondrial membrane potential, activating the apoptosis pathway, and causing the death of inner ear hair cells (Park et al., 2016). What's more, air pollution might indirectly be associated with hearing impairment by causing cardiovascular diseases through pro-inflammatory pathways and the production of reactive oxygen species (Simkhovich et al., 2008;Brook et al., 2010). It has been demonstrated that cardiovascular diseases are risk factors for hearing impairment (Oron et al., 2014;Tan et al., 2018). Nonetheless, the link between air pollution and hearing impairment was still evident after adjusting for related vascular problems in Model 3, suggesting that other mechanisms may also be involved in the link between air pollution and hearing impairment.
There was evidence that a healthy diet could protect against hearing impairment by reducing vascular damage, decreasing inflammation, and inhibiting oxidative damage (Curhan et al., 2020;Yevenes-Briones et al., 2021). Based on similar mechanistic pathways, modifying the health effects of air pollution by diet may be possible. But in our study, no effect modification of diet was observed. Studies previously showed an interaction between dietary patterns and air pollution exposure on health-related outcomes. In a birth cohort in Northeast China, animal foods pattern was found to significantly modify the association between exposure to NO 2 and carbon monoxide and gestational diabetes mellitus, with higher intake related to a higher rate of gestational diabetes mellitus following exposure to air pollution (Hehua et al., 2021). A Mediterranean diet reduced cardiovascular disease mortality risk related to long-term exposure to air pollutants in a large prospective US cohort (Lim et al., 2019). A prospective cohort study of Chinese older adults reported that a plant-based dietary pattern mitigated the adverse effects of air pollution on cognitive function (Zhu et al., 2022).
It seems to be accepted that hearing impairment becomes more common with increasing age (Díaz et al., 2016). Nevertheless, in this study the association between air pollution and hearing impairment was found only in participants aged 50 years or younger. An interaction effect between age and air pollution on hearing impairment was also observed. Age is an unmodifiable risk factor for hearing impairment, which could lead to cochlear aging (Yamasoba et al., 2013). However, modifiable risk factors play a significant part in the development of hearing impairment at a relatively young age (i.e., <85 years old), while their effects decrease in the oldest people (i.e., ≥85 years old) (Zhan et al., 2010). Therefore, we speculated that air pollution, a modifiable risk factor, might have a greater impact on people aged 50 years or younger than on those over 50, even though our study subjects were all under 85 years old.
Our research used data from the United Kingdom Biobank, a national cohort with good quality control. Additionally, the hearing test was based on the DTT data in the United Kingdom Biobank, which represented an ecologically effective and objective hearing indicator. We also adjusted for many confounders (including demographic information, lifestyle, and related diseases affecting hearing) to reduce their potential impact. However, our research also had some limitations. Above all, the cross-sectional design of this study was inadequate to account for the cause and effect between air pollution and hearing impairment, and further longitudinal studies are needed. Second, the sample of participants in United Kingdom Biobank was suggested to be unrepresentative of the general population because of the bias toward recruiting participants who were generally healthier and had a higher socioeconomic status (Fry et al., 2017). Hence, the subsample from United Kingdom Biobank and estimated hearing impairment rate in this study might not be representative of the general population. Third, like other epidemiological studies of air pollution, there might be potential misclassifications of air pollution exposure in this study because air pollution exposure was evaluated at the place of residence. Fourth, in the United Kingdom, where emissions regulations are strict and average pollution level is relatively low, it is not clear to what extent this study can be generalizable to other settings. Finally, in spite of adjusting for many confounders in our study, the potential effects of residual confounds of unmeasured variables could not be excluded, such as the use of ototoxic drugs, which was not considered due to lack of data.
CONCLUSION
In conclusion, we found that exposure to PM 10 , NO X , and NO 2 was associated with hearing impairment in British adults, while PM 2.5 and PM 2.5 absorbance did not show similar correlations. Our findings may help researchers pay more attention to the impact of air pollution on hearing impairment and provide a basis for developing effective prevention strategies.
DATA AVAILABILITY STATEMENT
The data supporting the results of this study can be found in the website of UK Biobank (www.ukbiobank.ac.uk) upon application.
ETHICS STATEMENT
The study involving human participants was carried out with the ethical approval obtained by United Kingdom Biobank from the National Health Service National Research Ethics Service.
AUTHOR CONTRIBUTIONS
YS, YT, and LY conceived the overall project and developed the methods and procedures used throughout the study. DL and LY managed the data collection and data entry and carried out data verification and statistical analyses. LY drafted the first version of the manuscript. All authors oversaw the statistical analysis, were involved in the interpretation of the results, and reviewed and approved the final manuscript.
FUNDING
This work was supported by the National Natural Science Foundation of China (Grant Number: 82071058).
Double-valued strong-coupling corrections to Bardeen-Cooper-Schrieffer ratios
Experimental discovery of near-room-temperature (NRT) superconductivity in highly-compressed H3S, LaH10 and YH6 restores fundamental interest to the electron-phonon pairing mechanism in superconductors. One of the prerequisites of phonon-mediated NRT superconductivity in highly-compressed hydrides is strong electron-phonon interaction, which can be quantified by the dimensionless ratios of Bardeen-Cooper-Schrieffer (BCS) theory vs k_BT_c/(ħω_ln), where T_c is the critical temperature and ω_ln is the logarithmic phonon frequency (Mitrovic et al. 1984 Phys. Rev. B 29 184). However, all known strong-coupling correction functions for BCS ratios are applicable for k_BT_c/(ħω_ln) < 0.20, which is not a high enough k_BT_c/(ħω_ln) range for NRT superconductors, because the latter exhibit variable values of 0.13 < k_BT_c/(ħω_ln) < 0.32. In this paper, we reanalyze the full experimental dataset (including data for highly-compressed H3S) and find that the strong-coupling correction functions for the gap-to-critical-temperature ratio and for the specific-heat-jump ratio are double-valued, nearly-linear functions of k_BT_c/(ħω_ln).
Mitrovic et al [18] proposed to use k_BT_c/(ħω_ln) as the primary variable in the strong-coupling correction function for the gap-to-critical-temperature ratio. Later, Marsiglio and Carbotte [19] and Carbotte [20] extended this proposal and used k_BT_c/(ħω_ln) as the variable in strong-coupling correction functions for other dimensionless ratios of Bardeen-Cooper-Schrieffer (BCS) theory [21]. All proposed correction functions [18-20] have the general form

X = A·[1 + B·(k_BT_c/(ħω_ln))²·ln(ħω_ln/(C·k_BT_c))],

where X is one of the dimensionless BCS ratios, A, B, and C are fitting constants, Δ(0) is the amplitude of the ground-state energy gap, ΔC(T_c) is the specific-heat jump at T_c, γ is the Sommerfeld constant, μ_0 is the permeability of free space, and B_c = φ_0/(2√2·π·λ·ξ) is the thermodynamic critical field, where φ_0 is the flux quantum, λ is the London penetration depth, and ξ is the coherence length.
II. Description of the problem
It should be noted that Eq. 3 is based on the non-linear fitting term proposed by Geilikman and Kresin [23], of the form a·t²·ln(1/(b·t)), where a and b are free fitting parameters and t = k_BT_c/(ħω_0), where ω_0 is some characteristic frequency of the full phonon spectrum, for which Mitrovic et al [18] proposed to use ω_ln (Eq. 1).
It can be seen that Eq. 5 and, as a consequence, Eq. 3 have one hidden parameter, namely the power-law exponent of 2, and the general formula for Eq. 5 is (Eq. 6) a·t^c·ln(1/(b·t)), where a, b, and c are free fitting parameters, while Eq. 3 should be expressed as (Eq. 7) X = A·[1 + B·(k_BT_c/(ħω_ln))^D·ln(ħω_ln/(C·k_BT_c))], where A, B, C, and D are free-fitting parameters. However, Eq. 7, even in its reduced form, i.e. Eq. 3, cannot have completely independent parameters, because of its complexity and strong non-linearity.
Despite the above-mentioned consensus about the strong-coupling nature of the electron-phonon interaction in NRT superconductors, analyses of experimental self-field critical current, Jc(sf,T), data [24] and upper critical field, Bc2(T), data [25] in highly-compressed H3S gave results close to the weak-coupling relation (Eq. 13) reported by Douglass and Meservey [29] in 1964. In this regard, the value of 2Δ(0)/(k_BT_c) = 3.20 ± 0.03 [24] was deduced from a single dataset (available at that time, and still the only Jc(sf,T) dataset available to date) which has six data points, most of which were measured at high reduced temperatures. However, the analysis [24] unavoidably showed that H3S is definitely not a strong-coupling superconductor, based on the fit of the Jc(sf,T) data at a fixed ratio of 2Δ(0)/(k_BT_c). Despite the fact that Eq. 3 is widely used to study superconductors ranging from elements [20,30] to NRT superconductors [4,7-9], it should be stressed that Mitrovic et al. [18], Marsiglio and Carbotte [19], Carbotte [20], and Nicol and Carbotte [7] excluded from their analyses data points with ratios of k_BT_c/(ħω_ln) > 0.20. However, such data points do exist (Eq. 14), and an extrapolation of Eq. 3 into the region of k_BT_c/(ħω_ln) > 0.20 (with the A, B, and C parameters reported by Mitrovic et al [18]) cannot be accurate, because data in this region were not used to constrain the fit. Figure 1 caption: The cyan line is Eq. 3 proposed by Mitrovic et al [18]. Blue points are the data used by Mitrovic et al [18]. Data points in the red circle were excluded from consideration in Refs. [7,18-20].
Taking into account that Kruglov et al. [8] recently calculated, for the Fm-3m phase of LaH10, values falling in the range of k_BT_c/(ħω_ln) > 0.20, there is a need to reconsider the strong-coupling correction functions for BCS ratios for the case of k_BT_c/(ħω_ln) > 0.20. This analysis is presented herein.
There is a need to clarify that the approach of restricting the total database before the analysis is generally designated as survivorship bias [31,32]: a portion of the full experimental database is excluded from consideration by applying some hidden or clearly stated criterion (in the given case, the hidden rule applied in Ref. [18] in the derivation of the equation was the exclusion of points with k_BT_c/(ħω_ln) > 0.20). In our fits, data points exhibiting high k_BT_c/(ħω_ln) values were retained; fitted parameters are summarized in Table I and some fits are shown in Fig. 2. It can be seen (Table 1 and Figure 2) that Eq. 14 does not provide a good fit quality for the dataset (R = 0.965) and, more importantly, that our fits to Eq. 3 reveal completely different A, B, and C parameter values (Table 1 and Fig. 2; Eqs. 18 and 19).
Equation for the gap-to-critical-temperature ratio
The large difference between Eq. 18 and Eq. 19 reflects the simple fact that the restricted experimental dataset (blue data points in Figs. 1, 2) is practically a linear function. Employing a strongly non-linear fitting function, i.e. Eq. 3 [7,18-20], to fit a nearly linearly-dependent dataset causes a problem known as overfitting. Indeed, it can be seen in Table 1 that, for the case of three free fitting parameters, there are large mutual parameter dependences. A general solution for the overfitting problem is to find a simple function with a minimal number of free fitting parameters which fits the data with similar quality.
By considering several possible options, we find that the restricted dataset can be fitted to a simple two-parameter function. Fitting results are summarized in Table 2 and shown in Fig. 3. It can be seen (Table 2, Fig. 3) that the free-fitting parameters A and B are close to each other and simultaneously close to the weak-coupling limit of 3.53. Based on this, a further reduction in the number of parameters was made by applying the condition A = B. The fit quality does not change (R = 0.992) when this condition is applied and, in addition, we observed a substantial drop in the mutual parameter dependence, to 0.938.
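A minimal numerical sketch of such a two-parameter fit and its one-parameter reduction is given below; the linear-in-t functional form and the data arrays are illustrative assumptions standing in for the compiled experimental ratios, not the exact equations of this paper.

```python
import numpy as np
from scipy.optimize import curve_fit

# Placeholder data: t = k_B*T_c/(hbar*omega_ln), y = 2*Delta(0)/(k_B*T_c).
t = np.array([0.04, 0.08, 0.12, 0.16, 0.20])
y = np.array([3.60, 3.78, 3.96, 4.09, 4.26])

def two_param(t, A, B):   # assumed simple form with two free parameters
    return A + B * t

def one_param(t, A):      # reduced form after imposing the condition A = B
    return A * (1.0 + t)

popt2, _ = curve_fit(two_param, t, y)
popt1, pcov1 = curve_fit(one_param, t, y)
print("A, B =", popt2, "  A (= B) =", popt1[0], "+/-", np.sqrt(pcov1[0, 0]))
```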
Based on the fact that the free-fitting parameter A = B = 3.52 ± 0.01 is remarkably close to the weak-coupling limit of 3.53, we plot the full dataset in Fig. 4; for H3S we use the value deduced in our previous work [25] from the temperature-dependent Bc2(T) data reported by Mozafari et al. [33]. It can be seen that H3S (red data point in Fig. 4) is located well below the trendline on which the majority of superconductors are located.
However, it can also be seen in Fig. 4 that H3S and the superconductors with high k_BT_c/(ħω_ln) values appear to follow a separate trend, which we describe by Eq. 23. The fitting result for Eq. 23 is shown in Fig. 4, where the free-fitting parameter is 2.87 ± 0.06.
It is important to note that the 95% confidence band for this fit covers all data in the dataset.
Based on this, we can conclude that the strong-coupling correction function for the gap-to-critical-temperature ratio is a double-valued function. The existence of the second branch explains the contradiction between the first-principles calculations, which predicted a strong-coupling value of this ratio for H3S, and the experimental analyses [24,25], which showed that H3S is a weak-coupling superconductor.
Exact formula for
Specific-heat-jump ratio

It can be seen (Fig. 5) that even the restricted ΔC(T_c)/(γT_c) vs k_BT_c/(ħω_ln) dataset (blue data points in Fig. 5) shows large scatter over part of the variable range. An application of the non-linear fitting function (Eq. 3) with three free-fitting parameters A, B, and C to such a scattered dataset cannot be proved to be valid, because the goodness of fit will always be low (for instance, R = 0.940 for the fit to Eq. 25), and it cannot be significantly improved because of the raw-data scattering. Thus, the task is to find a much simpler function which can fit the data with similar or better quality.
In Fig. 6(a) we show the calculated curve for Eq. 3 over the full range of k_BT_c/(ħω_ln), where we used A = 1.43, B = 52, and C = 3 as reported by Marsiglio and Carbotte [19]. This curve behaves much better than its counterpart (i.e., Eq. 18), because it bends down for k_BT_c/(ħω_ln) > 0.20 towards the experimental data at the high end of k_BT_c/(ħω_ln) values. Nevertheless, our fit of the restricted dataset to Eq. 25 (Fig. 6(b)) reveals parameters (Table 3) which, similarly to our previous finding regarding Eq. 18, are remarkably different from the ones reported by Marsiglio and Carbotte [19]. More details can be found in Table 3. It can also be seen in Fig. 6 that the 95% confidence band becomes very wide at large k_BT_c/(ħω_ln), and that the dataset can be split into two nearly linearly-dependent branches. Thus, we fit the restricted dataset indicated by blue data points in Fig. 7 (the upper branch) to a linear equation, keeping the parameter designations consistent with those already used in Eqs. 20-22.
The deduced linear equation for the upper branch has a free-fitting parameter of 0.74 ± 0.03; details can be found in Fig. 7. However, as shown in Fig. 57 of Ref. [20] and acknowledged in the text of Ref. [20], four superconductors with large k_BT_c/(ħω_ln) values fall outside the main trend. We fit this restricted dataset to a simple linear function (Eq. 30), but because of the small number of data points, the 95% confidence band is reasonably large.
Double-valued
The width of the confidence band has been reduced (Fig. 8) by fitting the dataset to a one-parameter function. The deduced parameter is 1.31 ± 0.01. It is important to note that all four superconductors with large k_BT_c/(ħω_ln) values, as well as H3S, fall into this narrower 95% confidence band (Fig. 8).
IV. Conclusions
In this paper we analysed data for the gap-to-critical-temperature ratio,
A mixed-methods examination of public attitudes toward vascularized composite allograft donation and transplantation
Background: This mixed-methods study examined the general public’s knowledge and attitudes about vascularized composite allografts. The availability of these anatomical gifts to treat individuals with severe disfiguring injuries relies largely on decisions made by family members. If vascularized composite allograft transplantation is to become more readily available, the knowledge and beliefs of the general public must be explored to ensure vascularized composite allograft donation approaches adequately support the donation decision-making process. Methods: We conducted six focus groups with 53 members of the general public, which were audio-recorded for accuracy and transcribed. Before each session, participants completed a brief survey assessing donation-related knowledge, attitudes, and beliefs. Analysis of qualitative data entailed the constant comparison method in the development and application of a schema for thematic coding. Descriptive statistics and Spearman’s rank coefficient were used in the analysis of the quantitative data. Results: Respondents were most knowledgeable about solid organ donation and least knowledgeable about vascularized composite allograft donation. Six major themes emerged: (1) strong initial reactions toward vascularized composite allografts, (2) limited knowledge of and reservations about vascularized composite allografts, (3) risk versus reward in receiving a vascularized composite allograft, (4) information needed to authorize vascularized composite allograft donation, (5) attitudes toward donation, and (6) mistrust of the organ donation system. Conclusion: The general public has low levels of knowledge and high levels of hesitation about vascularized composite allograft donation and transplantation. Education campaigns to familiarize the general public with vascularized composite allografts and specialized training for donation professionals to support informed family decision-making about vascularized composite allograft donation may address these issues.
Introduction
Vascularized composite allografts (VCAs) are defined as human tissue recovered for transplantation and as anatomical units containing multiple tissue types, requiring blood flow from surgical connections of blood vessels. 1 VCAs include the face, hands, arms, uterus, and penis, among others, and may offer an alternative treatment option for those who do not respond well to currently available reconstructive procedures or prosthetics. [2][3][4][5][6] Twenty-nine transplant centers currently maintain vascularized composite allotransplantation programs in the United States. 7 To date, 112 VCA procedures have been completed nationwide, and 22 candidates were awaiting VCA transplantation. 8,9 Although currently treated as solid organs by United Network for Organ Sharing (UNOS), 10 VCAs are not included with first-person donor pre-designations via driver's license notations, donor cards, or online registries. 11 As with other anatomical gifts, donation professionals from regional Organ Procurement Organizations (OPOs) obtain authorization of VCAs from the next-of-kin (i.e. family decision makers (FDMs)) of potential donors. While there is some indication that VCA donation is usually raised with potential donor families after they have authorized solid organ and tissue donation, VCA donation requests are neither common nor uniform. 11,12 Understanding how the typical layperson relates to and weighs the VCA donation opportunity against the donation of other anatomical gifts is critical to advancing the field.
In addition to its novelty and the relative infrequency of requests for VCA donation, only a modest body of literature exists describing the public's knowledge, acceptance of, and willingness to donate VCAs. A survey of urban emergency department patients, for example, found lower willingness to donate a VCA (54.6% hand and 44.0% face) than a solid organ, such as a kidney (77.5%). 13 Respondents supported face donation if recipients' injuries were sustained through military service or no-fault accidents. 13 A survey of the general public in 2016 (N = 1485) found that two-thirds of respondents were willing to donate their own face, legs, and hands/forearms, with a majority also expressing willingness to donate the penis and uterus. 14 Similar levels of support for VCA donation were found among veterans 15 and the general public when asked about a VCA donation from a deceased family member whose donation wishes were known. 14 A recent Gallup poll found strong support for solid organ donation (90.4%) but drastically lower support for VCA donation; 64% of respondents were willing to donate their own hands and 46.9% were willing to donate their own face after death. 16 Respondents were slightly less willing to donate a loved one's hand (58.6%) or face (43.6%). 16 Reasons for resistance to VCA donation included psychological discomfort with the idea of VCAs, desire to remain whole after death, and the possibility of identifying a VCA on another person. 13,14 A more nuanced understanding of the general public's motivations and rationale behind reactions to VCA donation are needed if VCA transplantation is to become a more readily available treatment for individuals with severe disfiguring injuries. The informational needs of the general public must also be explored to ensure fully informed VCA donation decisions. This study was designed to investigate VCArelated beliefs, attitudes, and behaviors within the general population. The goal was to identify important themes to inform both public education about VCAs and OPO professionals' discussions of VCA donation.
Sample and recruitment
An explanatory sequential mixed-methods design 17 was employed. Specifically, brief surveys were administered before subjects participated in in-person focus groups that assessed and explored attitudes and knowledge regarding VCA donation. Because markedly lower levels of organ donation knowledge, less favorable attitudes, as well as lower rates of FDM authorization have been documented among ethnic minorities and groups with lower educational attainment, 18,19 sampling was purposive. Specifically, we sought to capture this variation by concentrating recruitment on five demographic groups: (1) Whites with high educational attainment, (2) Whites with lower educational attainment, (3) Black Americans, (4) Latinx Americans, and (5) Asian Americans. The first phase of recruitment was accomplished via paid Facebook advertisements marketed to platform users above the age of 18 and living in the greater Philadelphia region from 1 March 2019 to 26 March 2019. The paid advertisements linked to an online Qualtrics form, which collected demographic and contact information of those interested in participating. We also posted informational flyers, staffed tables at community events, and leveraged long-standing relationships with community stakeholders to promote the study.
Potential participants were contacted by telephone, briefly screened, and scheduled for a focus group session. Participants were deemed eligible after meeting the following inclusion criteria: 18+ years of age, English-speaking, and with no obvious cognitive or decisional impairment. Participants received a $50 honorarium for completing the survey and focus group discussion. The study was deemed exempt by the Temple University Institutional Review Board (Protocol #25254). We used the Consolidated Criteria for Reporting Qualitative Research checklist for guidance in reporting qualitative research. 20
Data collection
Before each focus group began, participants read and signed an informed consent document and completed a brief self-administered survey, based on validated instruments used in past studies. [21-24] The survey gathered baseline knowledge of and attitudes toward organ donation and sociodemographic information to characterize the sample. Sixteen 5-point Likert-type scale questions assessed attitudes toward organ donation; higher scores indicated higher agreement with each statement. Two questions gauged willingness to donate solid organs and a single dichotomous item assessed registered organ donor status (registered/not registered).
The focus groups were conducted on Temple University's center city campus in Philadelphia, PA, easily accessible via private or public transportation. Focus groups were moderated by a member of the study team with extensive training and experience with qualitative methods (GPA) and facilitated by another trained team member (E.E.D.). Focus groups opened with a discussion about organ and tissue donation to provide background and context for VCA donation. A moderator's guide focused on five domains: (1) knowledge of organ transplantation, (2) knowledge of tissue transplantation, (3) attitudes about organ donation, (4) attitudes about tissue donation, and (5) knowledge and attitudes about VCA donation. Focus groups also explored respondents' receptivity to receiving a VCA and the information needed to make an informed VCA donation decision. The guide was informed by interviews with donation professionals. 12 Focus groups lasted from 78 to 110 min, were audio-recorded for accuracy, and transcribed verbatim.
Analytic plan
Descriptive statistics were used to characterize the sample and VCA attitudes and knowledge. Spearman's rank correlation coefficients assessed the relationship between scale items. SAS 9.4 was used for statistical computations, with values considered significant at α = 0.05.
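As a small illustration of the quantitative part of this plan (the study itself used SAS 9.4), the Spearman rank correlation between two attitude items could be computed as follows; the item names and responses are hypothetical.

```python
from scipy.stats import spearmanr

# Hypothetical 5-point Likert responses for two survey items.
organ_donation_support = [5, 4, 4, 3, 5, 2, 4, 5, 3, 4]
vca_donation_comfort = [4, 3, 4, 2, 5, 1, 3, 4, 2, 4]

rho, p_value = spearmanr(organ_donation_support, vca_donation_comfort)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f} (significant if p < 0.05)")
```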
Focus group transcripts were uploaded to MAXQDA (2018), a qualitative analysis software package, for thematic content analysis. 25 An initial coding schema was developed deductively from moderator's guide questions; additional codes were identified inductively through transcript review for emergent themes. A final codebook, containing coding rules, definitions, and examples, guided independent coding by two trained study staff. Inter-coder reliability was achieved at 84% agreement. Disagreements were resolved by discussion, and final coded transcripts used for analysis were reconciled cases reflecting coder consensus. 26
Results
A total of 208 individuals expressed interest in participating. Screening for eligibility led to a total sample of 53 individuals (25.5%) who met in six focus groups with 7 to 12 participants in each group. Participants ranged in age from 18 to 71 years, with a mean age of 41 years. Participants were primarily female and over half had earned a bachelor's degree or higher. Twenty-one participants identified as White, 16 Black, 6 Asian, and 12 as Hispanic (Table 1).
A comparison of pre-focus group surveys and focus group findings revealed several themes: (1) strong initial reactions toward VCAs, (2) limited knowledge of and reservations about VCAs, (3) risk versus reward in receiving a VCA, (4) information needed to authorize VCA donation, (5) attitudes toward donation, and (6) mistrust of the organ donation system. Each theme is described in detail in the following sections.
Strong initial reactions to VCA
Initial reactions to the idea of VCA transplantation as a viable treatment option varied widely. Most participants expressed awe and curiosity about advancements in medical technology. A 21-year-old Hispanic woman said, "I'm just so amazed that medicine has come this far. That that is something that's possible. I really, I'm happy about that. That's awesome" (P2, FG4). Similarly, a 61-year-old Black man shared, "it's amazing to me that the technology has advanced that far that you can do stuff like that" (P1, FG2). However, not everyone shared the same enthusiasm. A 22-year-old Asian woman stated, "I'm not exactly comfortable with it . . . that is literally your face on another person's face" (P6, FG1). A 37-year-old Black woman had a strong visceral reaction to the idea of VCA donation proclaiming, "Hell to the no!" (P1, FG1). These initial responses gave way to more nuanced rationale for both supporting and opposing VCA donation.
Limited knowledge of and reservations about VCA
The group discussions also exposed limited understanding of VCAs generally, as well as more specific reservations about VCA donation. Apprehensions centered on bodily mutilation and the loss or transfer of identity.

Transfer and loss of identity. Loss of the VCA donor's identity was a significant concern, with participants least receptive to face donation for this reason. A Black woman, age 61, explained, "I struggle with the face probably more than anything. Something about a face is to me the biggest genetic representation of your parents more so than your hands or your feet or anything like that. If you put my feet and my sister's feet in a line up they'd . . . be different but nothing distinguishing about either one as opposed to our faces" (P4, FG1). Respondents were distressed that the face, a defining feature of one's identity, could be transplanted onto another individual. A 60-year-old Black woman explained, "I don't want to walk down the street and see the exact face of my loved one because it's been transplanted to somebody else" (P2, FG2). Face recipients were commonly believed to appear similar or identical to their donors.
Risk versus reward in receiving a VCA
The discussions revealed a general unwillingness to receive a VCA, although 85% of participants reported a willingness to receive a solid organ transplant in the survey. In addition to issues described above, participants cited concerns that an intensive recovery would outweigh an allograft's potential benefits. A 64-year-old White woman explained, If I thought there was going to be a prolonged recovery period that was miserable, I think I would probably rather do without.
Information needed to authorize VCA donation
In order to make an informed decision about donating a VCA, participants wanted more information, including details about the potential recipient, expected procedure outcomes, impact on funeral arrangements, and the possibility of contacting the recipient. For instance, a White woman, age 29, explained, "I could lean more 'yes' to being willing . . . providing that I know more specifically where [the donated VCA] may be going" (P3, FG1). Speaking specifically about the age of the recipient as a decisional factor, a Black man, age 57, said, "It depends on what stage of life you're in. It's a big difference if a five-or ten-year old kid [needed a VCA] or seventeen or twenty-something adult versus a 55-or 75 year old person" (P7, FG4). A 65-year-old White participant explained that she would want to know "What it would improve. Or how much and in what ways it would improve someone's life" (P9, FG4). Regarding funeral arrangements, a 68-year-old Black woman asked, "If they're not going to be cremated, will you have to let them know you'll have to have a closed casket for a funeral, all that stuff so they can take it in before they make a final decision?" (P2, FG1). The extent of contact between recipient and donor families was also of significant interest. A 60-year-old Black woman asked, "So with VCA is that same option given where you can reach out to the donor's family to say thank you or is that anonymity there?" (P2, FG2).
Potential donor's wishes. Participants were also reluctant to authorize a VCA donation on behalf of a loved one when the decedent's wishes were unknown. The primary concern was that a donor may have neither known about VCA donation nor expressed any support for it. A Black man, age 57, explained, "If we talked about it before they passed away, then I would do it. But if they never agreed or never talked about it and discussed it, then I'm more inclined not to" (P7, FG4). Others mentioned that surrogate decisions would be more difficult than personal ones. A 22-year-old White woman stated, "If it's my mom, I know she wants to donate her organs, but I don't know if she wants to go this far. Like I think that would be a harder choice for me to make" (P5, FG5). Respondents seemed to not extrapolate wishes concerning donation of more common organs, such as kidney and heart to wishes concerning VCA donation.
Attitudes toward donation
Survey responses indicated a moderate to strong willingness to donate solid organs upon death ( Table 2). Over half of the sample were registered organ donors. Comparatively, slightly less than half of participants expressed a willingness to enroll in a VCA donation registry. When surveyed about donating a family member's organs and tissues, willingness declined to 41.5%, with most indicating that they would do so only if they knew their family member supported donation.
Participants offered similar reasons for supporting solid organ, tissue, and VCA donation, including making "something positive come out of death" (Table 3). Group discussion revealed that comfort with donating solid organs, tissues, and VCAs resulted from the belief that they would not be needed after death. For example, a 68-year-old woman stated, "I won't need them anymore once I'm gone. So, somebody else can live on with my organs" (P2, FG1). An Asian man, age 24, agreed saying "Oh, I don't care, I'm dead" (P4, FG6).
Mistrust of the organ donation system
Black respondents across all focus groups expressed distrust of the US organ donation system, citing historical, systemic racism and the existence of a black market. Speaking of community-held beliefs, a Black woman, age 58 said, "There's a lot of racial myths that go on. Like they think that if you're African American, you don't get a preference towards getting a transplant or that you might be targeted to be a donor" (P3, FG5). Some participants made specific references to the Tuskegee Syphilis Study. 27
Discussion
In addition to the 22 patients currently waitlisted for VCAs, there are millions of Americans living with limb loss or severe facial disfigurement and over 1600 service men and women living with single or bilateral injuries 28 for which VCA transplantation may be a viable treatment option. To ensure this therapeutic option is available to all Americans in need, concerted effort must be made to increase both first-person and family authorization of VCA donation, an option typically offered to FDMs upon the death of a family member, after they have been asked to donate solid organs and tissues for transplantation and research purposes. This investigation explored the general public's attitudes, beliefs, and knowledge about VCA transplantation using a mixed-methods design. Our findings support those of other large survey studies of VCA [13][14][15][16] while generating novel information on the rationale behind VCA attitudes and behaviors.

General knowledge about VCAs was low. Descriptions of VCA transplantation provided by the moderator were met with astonishment but also hesitation about personal and surrogate VCA donation. While all participants were familiar with solid organ and tissue donation, there was a progressive decline in knowledge from solid organs to tissue to VCA. Group discussions revealed the general public is largely unaware that this new type of transplantation exists and elicited novel concerns about VCA donation distinct from those cited for other donation types. All participants exhibited limited VCA knowledge despite its coverage in the mass media. [29][30][31] For most participants, support of organ and tissue donation was grounded in a belief that the act of donation makes something positive come out of death, reinforcing past findings on organ and tissue donation. 32 Consistent with prior research on organ and tissue donation, 32-34 respondents supportive of VCA donation also expressed a desire to help others in need and the belief that they would no longer need their body after death. Another factor contributing to willingness to donate was knowing the potential donor's wishes. Yet, while 56.5% of the sample were registered organ, tissue and eye donors, only 49.0% were willing to join a registry specifically for VCA donation.
Specific concerns about VCA donation were also revealed. While bodily mutilation has been noted as a source of reluctance for solid organ and tissue donation, 21,33-36 respondents reacted more viscerally to the idea of the donor being "cut up," evoking imagery from horror films. Participants further distinguished VCA donation from solid organ and tissue donation, stating the former is visibly obvious. This concern was also discussed in relation to burial practices and the ability to have open casket funerals. While posthumous prosthetics may acceptably substitute for a donated hand, foot or limb, face donation precludes a viewing. In addition, respondents expressed unease about face donations, associating faces as critical markers of identity. Participants expressed concern about seeing their loved one's face on a stranger, not knowing that face grafts assume the underlying bone structure of the recipient and would bear little to no resemblance to the donor. They also felt strongly that a loved one's hands would be identifiable. These concerns are unique to VCA donation.
Black participants expressed a distinct mistrust of organ and tissue donation across all focus group sessions, more so than any other racial/ethnic group sampled. Mirroring well-documented concerns among this population, 37-41 respondents across generations pointed to a belief in a black market of organs and systemic racism that targeted and manipulated Black Americans, citing the Tuskegee Syphilis Study specifically. Clearly, medical mistrust remains a major barrier to organ donation among Black Americans, despite decades of public awareness campaigns and interventions targeting this community. [38][39][40][41][42][43][44][45][46][47][48][49][50][51] Future research that examines and tests culturally targeted and acceptable messaging during VCA donation discussions may help to address Black families' hesitance about solid organ and tissue donation and may make VCA donation better received among Black communities.
These findings have implications for both public education about VCAs and VCA donation discussions. Limited knowledge about VCAs demonstrates a clear need to increase public awareness of this treatment option, which would make VCA donation as familiar and commonplace as organ and tissue donation and dispel its likeness to astounding technological advancements only found in science fiction. A recent study reported increased willingness to donate facial allografts after exposure to an educational video on the topic. 52 However, most available public education materials do not adequately address concerns regarding transference/loss of identity, bodily mutilation, and the possibility of recognizing face donors. 53 Improving public awareness may also stimulate conversations among families/kin, thus making potential FDMs aware of their loved ones' VCA donation wishes, which was noted to be a major decisional factor (Theme 4) by participants in this study. Furthermore, increasing public conversations could increase VCA authorization or increase FDM comfort/confidence in their VCA donation decision-making. Inserting VCA into the American lexicon may also reduce families' initial surprise and confusion if approached for VCA donation. Future public education campaign messaging should underscore VCAs' benefits, while preempting concerns identified in this study. To be most effective, these campaigns should utilize interpersonal channels for communicating information about VCA donation and provide an opportunity for enrollment on a registry. 54

OPOs (organ procurement organizations) training donation professionals to effectively communicate the VCA donation opportunity to families of deceased potential donors should consider the unique challenges of the process. Our recent work has found that current VCA donation approaches are neither uniform throughout the United States nor are there sufficient existing resources for donation professionals to successfully lead such discussions. 12 Moreover, the positive, curious reactions espoused by participants when first learning about VCAs indicate that the wider public may respond similarly, which could be leveraged to introduce and encourage VCA donation as a unique opportunity for substantially improving potential recipients' lives. Incorporating VCA-related questions and concerns into future training for donation professionals is likely to create higher levels of comfort, confidence, and competence in their communication with FDM. 12,22,23,35,[55][56][57] Evidence-based training that provides suggested language and communication techniques for allaying families' concerns would ensure donation professionals are well equipped to obtain family authorization and support families' VCA donation decision-making process.
The strengths of this study include the mixed-methods approach, which yielded rich, nuanced data to fill an important gap in our understanding of public perceptions of VCA donation, and the relatively large sample size (N = 53). The study also has limitations. First, the sample was drawn from a single metropolitan area. In addition, over half of participants had at least a 4-year college degree, a considerably higher proportion than the general population (21.3%). 58 The sample was predominantly female. Furthermore, recruitment messaging stated that focus group participants were needed for research about organ donation. As such, selection bias is possible. Respondents in a more generalizable sample of the population might express even less knowledge about and support for VCA donation, and even greater hesitancy about donation of VCAs. This mixed-methods study highlights unique concerns and questions about VCA donation in general and multiple types of VCAs not previously identified in past studies. As with solid organs and tissues, authorization of VCAs in the hospital remains the primary way to ensure an adequate supply for those in need and the only means of growing VCA transplantation as a treatment modality. The results support the need for initiatives to raise public awareness about VCAs, as public acceptability of and the ultimate authorization of VCAs are prerequisite to the continued development of vascularized composite allotransplantation as a transplantation subfield.
Author contributions
Heather M Gardiner participated in research design, writing of the paper, and data analysis. Ellen E Davis participated in research design, the performance of the research, writing of the paper, and data analysis. Gerard P Alolod participated in research design, the performance of the research, writing of the paper, and data analysis. David B Sarwer participated in research design, writing of the paper, and data analysis. Laura A Siminoff participated in research design, writing of the paper, and data analysis.
Declaration of conflicting interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This work was supported by the US Department of Defense (grant numbers W81XWH-18-1-0680 and W81XWH-18-1-0679).
Adult Intra-Thoracic Kidney: A Case Report of Bochdalek Hernia
Introduction. Bochdalek hernia is a congenital posterolateral diaphragmatic defect that allows abdominal viscera to herniate into the thorax. Intrathoracic kidney is a very rare finding, accounting for less than 5% of all renal ectopias and representing the least frequent form of renal ectopia. Case Presentation. We report the case of a 62-year-old man with a left thoracic kidney associated with a left Bochdalek hernia. Abdominal and chest X-rays revealed dilated loops of colon above the left hemidiaphragm. Abdominal ultrasound (US) showed a right kidney with many fluid-filled and exophytic cysts; the left kidney could not be assessed because it could not be located. An unenhanced Computed Tomography (CT) scan demonstrated a left-sided Bochdalek hernia with dilated colon loops and the left kidney within the pleural space. Magnetic Resonance (MR) imaging confirmed a defect in the left hemidiaphragm with herniation of the left kidney, omentum, spleen and colonic flexure, and showed intrarotation of the kidney with a posterior hilum on the sagittal plane. Conclusion. The association of a Bochdalek hernia and an intrathoracic renal ectopia is very rare and poses many diagnostic and management dilemmas for clinicians. Our patient was evaluated by CT and MR imaging. A high index of suspicion can result in early diagnosis and prompt intervention with reduced morbidity and mortality.
Introduction
Bochdalek hernia is a congenital posterior lateral diaphragmatic defect that allows abdominal viscera to herniate into the thorax [1].
It is the most common type of congenital diaphragmatic hernia, occurring in approximately 1 in 2,200-12,500 live births; these hernias are seen with much greater frequency in the left hemithorax and are associated with an otherwise normal diaphragm [2,3].
Intra-thoracic kidney is a very rare finding, accounting for less than 5% of all renal ectopias and representing the least frequent form of renal ectopia [4][5][6]; most are found in males and are asymptomatic. The incidence of intra-thoracic renal ectopia as a result of congenital diaphragmatic hernia has been reported to be less than 0.25% [4].
We report a case of a man who had a left thoracic kidney associated with left Bochdalek hernia.
Case Report
A 62-year-old man came to our centre for a chest X-ray and an abdominal X-ray. He reported a cough of 1 month's duration, abdominal pain that was particularly post-prandial, and difficulty urinating.
Abdominal and chest X-rays revealed dilated loops of the colon above the left hemidiaphragm (Figure 1). He did not suffer from respiratory distress or recurrent pleural effusion.
The patient also underwent renal and bladder ultrasound (ATL HDI 5000); the right kidney showed many fluid-filled cysts, a few with exophytic growth. The left kidney could not be assessed because it could not be located. The bladder wall was thickened (Figure 2).
The radiologist decided to perform a Computed Tomography (CT) study in order to evaluate the left kidney and the bladder. An unenhanced CT scan was performed because of a serum creatinine level of 2.6 mg/dl and an azotemia value of 89 mg/dl.
Computed tomography showed a left-sided Bochdalek hernia with dilated colon loops and the left kidney within the pleural space. The intra-thoracic kidney had a posteriorly positioned hilum and an elongated, expanded ureteropelvic junction and remaining ureter. The contralateral kidney had multiple exophytic cysts, with a regular urinary tract (Figure 3).
For a functional study of the patient, high-field (3T) Magnetic Resonance imaging (Intera, Philips Medical Systems, Best, Netherlands) was performed. After a survey scan and a reference scan, axial T1 turbo spin echo (TSE), axial STIR, and breath-hold T2-weighted sequences were acquired in the axial, coronal, and sagittal planes with a 2 mm partition thickness and no gap.
A bolus injection of gadolinium (Gd) Gadoteridol (Pro-Hance) at the standard single dose of 0.1 mmol/kg of body weight was administered at a rate of 2.5 mL/sec using an automatic injector for the urographic study.
MRI confirmed intrarotation of the left kidney with a posterior hilum on the sagittal plane. Contrast-enhanced sequences demonstrated normal renal arteries; a perfusion delay compared with the right kidney was observed, attributable to traction on the vascular pedicle (Figures 4 and 5). The patient was referred for urologic and nephrologic evaluation.
Discussion
Bochdalek's hernia (posterolateral defect, pleuroperitoneal hernia), first described by Bochdalek in 1848 [7], is a congenital posterolateral diaphragmatic defect that allows abdominal viscera to herniate into the thorax, resulting from failed closure, at 8 weeks of gestation, of the pleuroperitoneal ducts, the primitive communications between the pleural and abdominal cavities [1,3]. It is more common in infants (90%) with an incidence of 1/2500 live births; however, the literature on Bochdalek hernia in adulthood is rather limited, with approximately 100 cases reported [2,[8][9][10][11][12][13][14], even though the asymptomatic prevalence in the general population may be as high as 0.17% [10,15]. It occurs most frequently on the left side, with approximately 80% being left-sided and 20% right-sided [16]. This is presumably because the pleuroperitoneal canal closes earlier on the right side [17], or because of narrowing of the right pleuroperitoneal canal by the caudate lobe of the liver [18].
Bilateral Bochdalek's hernias are rare [16,17]. These hernias are usually congenital and may cause severe life-threatening respiratory distress in the first hours or days of life. Herniated organs are frequently the omentum, bowel, spleen, stomach, kidney, and pancreas on the left, and part of the liver on the right. Because of the pulmonary hypoplasia due to the compression of the lungs by the adjacent hernia, these patients are frequently symptomatic at birth.
Although this condition usually presents in the neonatal period with severe respiratory distress, a few cases that remain asymptomatic until adult life have also been reported in the literature and are usually associated with a better outcome [19][20][21].
In childhood, they are often misdiagnosed as pleuritis, pulmonary tuberculosis, or pneumothorax, and this can result in significant morbidity.
In adults, as in infants, most occur on the left side (85%), usually causing gastrointestinal symptoms. In contrast to the acute presentation of infants with these hernias, most adults present with more chronic abdominal symptoms [22], such as recurrent pain, vomiting, and postprandial fullness [23]. Chronic dyspnea, pleural effusion, and chest pain are the most common chest symptoms and signs present in this condition [8].
Diagnosis requires a high index of suspicion and needs to be confirmed with imaging studies. In adults, Bochdalek's hernias are diagnosed incidentally, but most cases become surgical emergencies when an abdominal organ is strangulated [3]. While urgent surgery is frequently needed for the treatment of a symptomatic Bochdalek hernia, the surgical treatment of asymptomatic Bochdalek hernias may be performed days to years later according to the patient's status. Larger hernias should be operated on because of potential complications.
Renal ectopia describes a kidney that is not located in its usual position. Ectopic kidneys are thought to occur in approximately 1 in 1,000 births, but only about 1 in 10 of these are ever diagnosed [6].
Some of these are discovered incidentally, such as when a child or adult is having surgery or an X-ray for a medical condition unrelated to the renal ectopia.
The complex embryological development of the kidneys can lead to renal anomalies, such as renal ectopia. Most ectopic kidneys are found in the lower lumbar or pelvic region secondary to failure to ascend during fetal life [24].
With a prevalence of less than 0.01%, intra-thoracic kidneys account for less than 5% of all renal ectopias and are the least frequent form of renal ectopia [4][5][6].
Wolfromm [25] reported the first case of clinically diagnosed intra-thoracic kidney in 1940. In 1988, S. M. Donat and P. E. Donat [4] reviewed cases reported in the literature between 1922 and 1986, and found the abnormality to occur more commonly on the left (62%) than on the right side (36%); 2% of the patients had bilateral intra-thoracic kidney. In addition, this anomaly is observed with higher frequency in males (63%) than in females (37%) [26].
Pfister-Goedeke and Burnier [27] classified the thoracic kidneys into 4 groups: thoracic renal ectopia with closed diaphragm, eventration of the diaphragm, diaphragmatic hernia (congenital diaphragmatic defects or acquired hernia such as Bochdalek hernia), and traumatic rupture of the diaphragm with renal ectopia.
The incidence of intra-thoracic kidney with Bochdalek hernia is reported to be less than 0.25% [4], and the relationship between them remains uncertain. The embryological origin is debatable: various authors have proposed that there exists either an abnormality in the pleuroperitoneal membrane fusion or an abnormality in the high migration of the kidney due to delayed mesonephric involution [28].
Intra-thoracic kidney associated with Bochdalek hernia differs from other intra-thoracic renal ectopias as it tends to be mobile and easily reduced from the thorax to the abdominal cavity along with other organs [26]. Concomitant herniation of abdominal viscera is common.
In all cases, the kidney is located within the thoracic cavity and not in the pleural space; the renal vasculature and ureter on the affected side typically exit the pleural cavity through the foramen of Bochdalek and are usually significantly longer than those in the normally positioned kidney [29]. Most intra-thoracic kidneys remain asymptomatic and have a benign course [30].
Anatomically, the features of intra-thoracic kidney are rotational anomalies such as the hilus facing posteriorly, long ureter, high origin of the renal vessels, and occasionally medial deviation of the lower pole of the kidney [26,31,32].
Treatment for the ectopic kidney is only necessary if obstruction or vesicoureteral reflux (VUR) is present. There is an increased incidence of ureteropelvic junction obstruction, VUR, and multicystic renal dysplasia in ectopic kidney [6,29].
If the kidney is not severely damaged by the time the abnormality is discovered, the obstruction can be relieved or the VUR corrected with an operation. However, if the kidney is badly scarred and not working well, removing it may be the best choice [6,29].
Our patient had an elongated ureter, a medially deviated lower pole, and a rotational abnormality in which the hilum was posterior. The left intra-thoracic kidney and the left Bochdalek hernia in our patient were visualized by CT and MR imaging.
Intra-thoracic kidneys are rare clinical entities that pose many diagnostic and management dilemmas for clinicians. The association of a Bochdalek hernia and an intra-thoracic renal ectopia is very rare. It is emphasized that this condition should be considered in the differential diagnosis of a lower intra-thoracic mass. A high index of suspicion can result in early diagnosis and prompt intervention with reduced morbidity and mortality.
Generalizations of Kaplansky Theorem for some (p,k)-Quasihyponormal Operators
In the present paper, we generalize some notions of bounded operators, such as k-quasihyponormality and k-paranormality, to unbounded operators on a Hilbert space. Furthermore, we extend the Kaplansky theorem for normal operators to some (p, k)-quasihyponormal operators; namely, we study the (p, k)-quasihyponormality of the products AB and BA of two operators.
INTRODUCTION
Throughout the paper we denote by H a Hilbert space over the field of complex numbers C; the usual inner product and the corresponding norm of H are denoted by ⟨·, ·⟩ and ‖·‖, respectively. Let us fix some more notation. We write B(H) for the set of all bounded linear operators on H whose domains are equal to H. For an operator A ∈ B(H), the range, the kernel and the adjoint of A are denoted by R(A), N(A) and A*, respectively. Let A and B be normal operators on a complex separable Hilbert space H. The statement that AX = XB implies A*X = XB* for an operator X ∈ B(H) is known as the familiar Fuglede-Putnam theorem (see [14]).
Consider two normal (resp. hyponormal) operators A and B on a Hilbert space. It is known that, in general, AB is not normal (resp. not hyponormal). Kaplansky showed that it may be possible that AB is normal while BA is not. Indeed, he showed that if A and AB are normal, then BA is normal if and only if B commutes with AA * , (see [12]).
In [18,Theorem 3], Patel and Ramanujan proved that if A and B ∈ B(H) are hyponormal such that A commutes with |B| and B commutes with |A * | then AB and BA are hyponormal.
The study of operators satisfying the Kaplansky theorem is of significant interest and is currently being carried out by a number of mathematicians around the world. Some developments on this subject have been made in [6,10,12,15,16,17] and the references therein.
The aim of this paper is to give sufficient conditions on two (p, k)-quasihyponormal operators (bounded or not), defined on a Hilbert space, which make their product (p, k)-quasihyponormal. The inspiration for our investigation comes from [1], [15] and [17].
The outline of the paper is as follows. First of all, we introduce notation and consider a few preliminary results which are useful for proving the main results. In the second section we discuss conditions which ensure hyponormality, k-quasihyponormality or (p, k)-quasihyponormality of the product of hyponormal, k-quasihyponormal or (p, k)-quasihyponormal operators. In Section three, the concepts of k-quasihyponormal and k-paranormal unbounded operators are introduced. We give sufficient conditions which ensure k-quasihyponormality (k-paranormality or k-*-paranormality) of the product of k-quasihyponormal (k-paranormal or k-*-paranormal) unbounded operators.
KAPLANSKY LIKE THEOREM FOR BOUNDED (p, k)-QUASIHYPONORMAL OPERATORS
The next definitions and lemmas give a brief description of the background on which the paper builds.
These classes are related by proper inclusions (see [14]).
We need the following lemma which is important for the sequel.
(2) If the range of C is dense in H then
Proof. The proof is left to the reader.
The following famous inequality is needful.
(Hansen's inequality) Let A, B ∈ B(H) be such that A ≥ 0 and ‖B‖ ≤ 1; then (B*AB)^δ ≥ B*A^δB for every δ ∈ (0, 1].
Kaplansky showed that it may be possible that AB is normal while BA is not. Indeed, he showed that if A and AB are normal, then BA is normal if and only if B commutes with |A|.
Kaplansky's theorem has been extended from normal operators to hyponormal operators and unbounded hyponormal operators by the authors in [1]. We collect some of their results in the following theorem. (1) If A is normal and AB is hyponormal then (2) If A is normal and AB is co-hyponormal then (3) If A is normal, AB is hyponormal and BA is co-hyponormal, then BAA* = AA*B ⟺ AB and BA are normal.
We give another proof of Kaplansky's theorem.
Theorem 2.2 (Kaplansky, [12]). Let A and B ∈ B(H) be two bounded operators such that A and AB are normal. Then BA is normal if and only if A*AB = BA*A.
Proof. Assume that A*AB = BA*A; we need to prove that BA is normal.
It is well known that A is normal if and only if ‖Ax‖ = ‖A*x‖ for all x ∈ H.
Since AB is normal we have ‖(AB)Ax‖ = ‖(AB)*Ax‖ for all x ∈ H, and we deduce that By the hypotheses given in the theorem, we have From the identities above we have ⟨Tx, A*Ax⟩ = 0 for all x ∈ H. This implies that BA is normal.
If N(A) ≠ {0}, suppose that, contrary to our claim, the operator T ≢ 0. There exists From this we deduce that there exists z_0 ∈ N(A) so that ⟨Tx_0, z_0⟩ ≠ 0.
As usual, this leads to the statement that This means that, x 0 ∈ N (A) and This contradicts the assumption that T x 0 = 0.
(2) "⇐=" The reverse implication follows as in Kaplansky's original argument. We have ABA = ABA ⇒ (AB)A = A(BA), and by the Fuglede-Putnam theorem
Consider two quasihyponormal operators A and B on a Hilbert space. It is known that, in general, AB is not quasihyponormal. A simple calculation shows that A and B are quasihyponormal and AB is not quasihyponormal, since ‖(AB)*(AB)e_0‖ = 1 and ‖(AB)^2 e_0‖ = 0.
Denote by C_{mn} the set of all m × n complex matrices.
In [8], the authors proved the following results. We now present the main results of this paper. Our intention is to study conditions under which the product of operators will be hyponormal, k-quasihyponormal or (p, k)-quasihyponormal. (2) If BU is hyponormal, then BA is hyponormal.
This shows that AB is hyponormal.
(2) Suppose that BU is hyponormal. Then This shows that BA is hyponormal.
(3) Assume that UB is quasihyponormal, then This shows that AB is quasihyponormal.
(4) Assume that BU is quasihyponormal. Then This shows that BA is quasihyponormal.
We even have an analogue for k-quasihyponormal operators. Proof. Let x ∈ H. (1) If A*A^k B = BA*A^k and A^j B^j = (AB)^j for j ∈ {k, k + 1}, then AB is k-quasihyponormal.
(2) If B*B^k A = AB*B^k and B^j A^j = (BA)^j for j ∈ {k, k + 1}, then BA is k-quasihyponormal. Proof. (1) for all x ∈ H. Consider operators A and B defined by Ae_n = α_n e_n and Be_n = e_{n+1}, ∀n ≥ 1, respectively. Assume further that α_n is bounded, real-valued and positive for all n. Hence A is self-adjoint (hence normal!) and positive. Then ABe_n = α_{n+1} e_{n+1}, ∀n ≥ 1.
For convenience, let us carry out the calculations with infinite matrices. Then It thus becomes clear that AB is quasihyponormal iff α_n ≤ α_{n+1}. Similarly, BAe_n = α_n e_{n+1}, ∀n ≥ 1.
Whence the matrix representing BA is given by: Therefore, Accordingly, BA is quasihyponormal if and only if α_n ≤ α_{n+1} (thankfully, this is the same condition as for the quasihyponormality of AB). Finally, Thus BA is quasinormal.
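A quick numerical sanity check of this criterion can be run on finite truncations; the following sketch is not part of the original paper, and indices near the truncation edge are skipped because the cut-off itself creates spurious violations unrelated to the infinite-dimensional operators.

```python
# Check the basis-vector inequality ||T*T e_n|| <= ||T^2 e_n|| for T = AB,
# with A = diag(alpha_n) and B the unweighted forward shift, on an N x N truncation.
import numpy as np

def check_quasihyponormal_on_basis(alpha):
    N = len(alpha)
    A = np.diag(alpha)
    B = np.diag(np.ones(N - 1), -1)        # B e_n = e_{n+1} (0-indexed columns)
    T = A @ B                               # T e_n = alpha_{n+1} e_{n+1}
    ok = True
    for n in range(N - 2):                  # skip the last two indices (edge effects)
        e = np.zeros(N); e[n] = 1.0
        lhs = np.linalg.norm(T.conj().T @ (T @ e))   # ||T* T e_n||
        rhs = np.linalg.norm(T @ (T @ e))            # ||T^2 e_n||
        ok &= lhs <= rhs + 1e-12
    return bool(ok)

increasing = np.linspace(1.0, 2.0, 30)      # alpha_n <= alpha_{n+1}
decreasing = increasing[::-1]
print(check_quasihyponormal_on_basis(increasing))   # True
print(check_quasihyponormal_on_basis(decreasing))   # False
```

As expected, the inequality holds for a nondecreasing weight sequence and fails for a decreasing one, matching the condition α_n ≤ α_{n+1} stated above.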
Proposition 2.7. Let A, B ∈ B(H) be such that A is normal and AB is paranormal. Then BA is paranormal.
Proof. Let A = U|A| with U unitary. Since A is normal we have |A|U = U|A| and hence B|A| = |A|B. This then gives From this fact we obtain that, for every unit vector x ∈ H, Hence BA is a paranormal operator. Then T = ωA + B is quasihyponormal for all ω ∈ C.
Proof. By the same arguments as in the proof of the proposition above, we have The next result is a necessary and sufficient condition for span{A, B} to be quasihyponormal.
Theorem 2.7. Let A and B ∈ B(H) be such that A is (p, k)-quasihyponormal and B is invertible. If A commutes with B and B*, then AB is (p, k)-quasihyponormal for 0 < p ≤ 1 and k ≥ 1. Proof.
KAPLANSKY LIKE THEOREM FOR UNBOUNDED k-QUASI-HYPONORMAL OPERATORS
In this section, we generalize some notions of bounded operators to unbounded operators on a Hilbert space and give sufficient conditions which ensure k-quasihyponormality (k-paranormality or k-*-paranormality) of the product of k-quasihyponormal (k-paranormal or k-*-paranormal) unbounded operators.
For A ∈ Op(H), define A^2 on D(A^2) = {x ∈ D(A) : Ax ∈ D(A)} by A^2 x = A(Ax). We can define higher powers recursively: given A^n, define A^{n+1} on D(A^{n+1}) = {x ∈ D(A^n) : A^n x ∈ D(A)} by A^{n+1} x = A(A^n x). Let us begin with the concept of k-quasihyponormality. (2) The class of k-quasihyponormal operators properly contains the class of k′-quasihyponormal operators (k′ < k).
Definition 3.2. A densely defined operator A : D(A) ⊂ H → H is said to be
(1) k-paranormal for some positive integer k if ‖Ax‖^k ≤ ‖A^k x‖ ‖x‖^{k−1} for every x ∈ D(A^k), or equivalently ‖Ax‖^k ≤ ‖A^k x‖ for every unit vector x ∈ D(A^k);
(2) k-*-paranormal for some positive integer k if D(A) ⊂ D(A*) and ‖A*x‖^k ≤ ‖A^k x‖ ‖x‖^{k−1} for every x ∈ D(A^k), or equivalently D(A) ⊂ D(A*) and ‖A*x‖^k ≤ ‖A^k x‖ for every unit vector x ∈ D(A^k).
A simple calculation shows that the adjoint of the unilateral forward weighted shift is given by A*e_n = ω_n e_{n−1} for all n ∈ Z.
By this we have A*Ae_n = ω_n^2 e_n and A^2 e_n = ω_n ω_{n+1} e_{n+2}. Consequently, ‖A*Ae_n‖ ≤ ‖A^2 e_n‖ for all n ∈ Z, which implies that the operator A is quasihyponormal.
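Written out (a brief sketch; the assumption that the positive weight sequence is nondecreasing, which is what the inequality above requires, is made explicit here):

```latex
\[
A^{*}Ae_{n}=\omega_{n}^{2}e_{n},\qquad A^{2}e_{n}=\omega_{n}\,\omega_{n+1}e_{n+2},
\]
\[
\omega_{n}\le\omega_{n+1}\ \Longrightarrow\ \omega_{n}^{2}\le\omega_{n}\,\omega_{n+1}
\ \Longrightarrow\ \|A^{*}Ae_{n}\|\le\|A^{2}e_{n}\|\quad (n\in\mathbb{Z}),
\]
```

so the weighted shift is quasihyponormal exactly when its (positive) weight sequence is nondecreasing.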
Proof. In fact Proof. Let us suppose that A is k-quasihyponormal. Then it follows that the following relation holds: This means that This completes the proof of the proposition.
Proof. As A is k-quasihyponormal, we have by definition that D(A) ⊂ D(A*) and Since A is invertible with an everywhere defined bounded inverse, we have for all x ∈ D(A): Hence we may write Proof. Since A is normal, we have A = PU = UP with P ≥ 0 and U unitary.
A simple computation shows that This implies that BA is k-quasihyponormal.
Proposition 3.5. Let A ∈ B(H) and let B : D(B) ⊂ H → H be a closed densely defined operator such that A and B are k-quasihyponormal. If A*A^k B ⊆ BA*A^k and A^j B^j ⊆ (AB)^j for j ∈ {k, k + 1}, then AB is k-quasihyponormal.
Proof. Since A ∈ B(H) and B is closed densely defined, it is well known that (AB) * = B * A * .
Hence we may write This completes the proof. (1) If AB is k-paranormal and A * AB ⊆ BA * A, then BA is k-paranormal.
Proof. (1) Using the normality of A = U|A| = |A|U ( U unitary) and the fact that we see that BA = U * ABU.
Now we have
(2) By similar argument.
How Uncertain is the Survival Extrapolation? A Study of the Impact of Different Parametric Survival Models on Extrapolated Uncertainty About Hazard Functions, Lifetime Mean Survival and Cost Effectiveness
Background and Objective The extrapolation of estimated hazard functions can be an important part of cost-effectiveness analyses. Given limited follow-up time in the sample data, it may be expected that the uncertainty in estimates of hazards increases the further into the future they are extrapolated. The objective of this study was to illustrate how the choice of parametric survival model impacts on estimates of uncertainty about extrapolated hazard functions and lifetime mean survival. Methods We examined seven commonly used parametric survival models and described analytical expressions and approximation methods (delta and multivariate normal) for estimating uncertainty. We illustrate the multivariate normal method using case studies based on four representative hypothetical datasets reflecting hazard functions commonly encountered in clinical practice (constant, increasing, decreasing, or unimodal), along with a hypothetical cost-effectiveness analysis. Results Depending on the survival model chosen, the uncertainty in extrapolated hazard functions could be constant, increasing or decreasing over time for the case studies. Estimates of uncertainty in mean survival showed a large variation (up to sevenfold) for each case study. The magnitude of uncertainty in estimates of cost effectiveness, as measured using the incremental cost per quality-adjusted life-year gained, varied threefold across plausible models. Differences in estimates of uncertainty were observed even when models provided near-identical point estimates. Conclusions Survival model choice can have a significant impact on estimates of uncertainty of extrapolated hazard functions, mean survival and cost effectiveness, even when point estimates were similar. We provide good practice recommendations for analysts and decision makers, emphasizing the importance of considering the plausibility of estimates of uncertainty in the extrapolated period as a complementary part of the model selection process. Electronic supplementary material The online version of this article (10.1007/s40273-019-00853-x) contains supplementary material, which is available to authorized users.
Introduction
Estimates of lifetime mean survival are often a key component of cost-effectiveness analyses, as they typically quantify the benefits of new treatments. Cost-effectiveness analyses play an important role in reimbursement decisions [1]. Clinical trials typically have a shorter follow-up period than the time horizon required in a cost-effectiveness analysis. Hence, extrapolation of hazard functions is often required to estimate lifetime mean survival. This may be achieved by fitting commonly applied parametric survival models (as described in Sect. 2.1) to sample data. The National Institute for Health and Care Excellence Decision Support Unit Technical Support Document 14 describes different parametric survival models and suggestions for how to choose between them, highlighting the importance of considering uncertainty [2].
Extrapolation introduces additional uncertainty that does not occur for within-sample prediction. This is due to the absence of data to calibrate model estimates or validate their plausibility. For example, an exponential distribution may provide an adequate fit to the observed data. By definition, the suitability of the exponential model for the extrapolated period cannot be assessed from the observed data. External evidence, such as clinical opinion, may be used to support the plausibility of extrapolated estimates. However, even if the exponential distribution is deemed suitable, there remains uncertainty that the model parameter estimated from the observed data will be the same in the future. Hence, there is extrapolation uncertainty in both the suitability of the chosen model and the suitability of the estimated parameters. As such, there is often an expectation amongst analysts and decision makers that uncertainty about estimates of hazard functions (as quantified by their variance) should increase over the extrapolation period. The effect of this extrapolation uncertainty is recognised in the time-series literature, with extrapolations being associated with greater uncertainty than within-sample estimates [3,4]. To our knowledge, there has been little consideration of whether the use of commonly applied parametric survival models adequately reflects extrapolation uncertainty.

Key Points for Decision Makers
Guidance is available on choosing between parametric survival models used in a cost-effectiveness analysis. However, this does not consider the impact of model choice on uncertainty in extrapolated hazard functions and lifetime mean survival. Intuitively, we might expect that this uncertainty increases the further into the future we extrapolate.
We illustrate, using seven commonly applied parametric survival models and four hypothetical datasets, that the choice of survival model can have a marked impact on resulting estimates of uncertainty about the hazard function, lifetime mean survival and cost effectiveness. Estimates of uncertainty about extrapolated hazard functions could increase, decrease or be constant depending on the model used.
We provide recommendations on how the clinical plausibility of estimates of uncertainty about hazard functions and estimates of cost effectiveness should be used as part of the model selection process.

Our study had two aims. The first was to illustrate the impact of model choice on estimates of uncertainty about extrapolated hazard functions, estimates of lifetime mean survival and estimates of cost effectiveness. The second aim was to raise awareness of this impact when producing and critiquing survival models. We begin Sect. 2 by showing how to derive estimates of uncertainty of extrapolated hazard functions and the estimated lifetime mean survival using both analytical expressions and approximation methods (delta and multivariate normal approach) for when exact analytical solutions are not tractable. We then create four representative hypothetical datasets, reflecting hazard functions commonly encountered in clinical practice for use in case studies, to illustrate the impact of model choice on estimates of uncertainty. We used one of these datasets to perform a hypothetical cost-effectiveness analysis. Section 3 presents the results of the case studies and the cost-effectiveness analysis. In Sect. 4, we provide recommendations on how to use the impact of survival model choice on estimates of uncertainty as part of the model selection process. We focus on extrapolating a single arm of a trial.

Commonly Applied Parametric Survival Models

For this study, we considered seven commonly applied parametric survival models: exponential, Weibull, Gompertz, gamma, log-logistic, log-normal and generalised gamma distributions. With the exception of the Gompertz distribution, these models all belong to the generalised F family of distributions [5,6]. We originally also considered the generalised F model, but do not include it here, as the model estimation procedure did not always converge under the default settings [see Appendix 2 of the Electronic Supplementary Material (ESM) for more details]. The different survival models make different assumptions about their underlying hazard functions over time: an exponential distribution assumes a constant hazard; Weibull, Gompertz and gamma distributions allow for monotonically increasing or decreasing hazards over time; log-normal and log-logistic distributions allow the hazard function to be unimodal (also monotonically decreasing for the log-logistic) [6]. The generalised gamma distribution is the most flexible of the commonly applied models. It can model hazards that are constant, monotonic (increasing or decreasing), bathtub or arc shaped [7]. Table 1 describes the characteristics of seven commonly used survival models, including the survival function S(t), hazard function h(t) and cumulative hazard function H(t). These three functions are all related via the equation:
S(t) = exp(−H(t)) = exp(−∫_0^t h(u) du).   (1)
We focus on the hazard function because it provides insights into the natural history of a disease along with any time-varying responses to treatment [8]. We also consider the survival function because this is a clinically important statistic.
Estimating Uncertainty About Hazard and Survival Estimates
In this section, we describe how to quantify the uncertainty in the hazard and survival functions and uncertainty in estimates such as mean survival time. For illustration, we take a frequentist perspective and estimate parameters using maximum likelihood. Ideally, exact analytic expressions of variance would be available for the estimates of interest (hazard and survival functions, and mean survival time). However, as these are estimates of non-linear functions of model parameters, approximation methods are required. Exact analytical expressions are available for the exponential model. The maximum likelihood estimate of the model parameter is:
λ̂ = N_e / Σ_i t_i,   (2)
where the subscript i denotes an individual, δ_i = 1 for an event and zero otherwise, t_i represents the observed times and N_e = Σ_i δ_i represents the number of events. As described in Collett [6], the variance of the estimated hazard function is the variance of the estimated model parameter λ̂, given by:
Var(λ̂) = λ̂² / N_e.   (3)
From Eq. 3, the variance of the hazard function is constant with respect to time, which means that the uncertainty does not 'fan out' over time. Thus, for the exponential model, uncertainty about the hazard function depends only upon the sample data that are used to estimate λ and does not depend on whether we are considering the observed or unobserved period.
[Table 1. Commonly used parametric survival models: for each model (and its parameters), the survival function S(t), cumulative hazard function H(t), hazard function h(t) and the possible shapes of the hazard function (constant; monotonically increasing; monotonically decreasing; increasing then decreasing). Footnotes: Φ denotes the cumulative standard normal distribution; Γ(a, t) = ∫_0^t x^{a−1} e^{−x} dx denotes the incomplete gamma function; e denotes the exponential function. Allowing the Gompertz shape parameter to be negative implies that the survival function will never equal 0.]

Estimates of uncertainty about the exponential survival function can be derived from the hazard function by using the relationship in Eq. 1. For the exponential model, the estimate of mean survival μ̂ is given by:
μ̂ = 1 / λ̂.   (4)
A confidence interval for the estimated mean survival may be derived via the delta method:
Var(μ̂) ≈ Var(λ̂) / λ̂⁴ = 1 / (λ̂² N_e).   (5)
Exact analytical expressions of variance (for hazard and survival functions) are not available for the other six commonly used parametric survival models. Two different approximation methods are commonly used to estimate variances of a function: the delta method [9] and the multivariate normal method [10].
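Before turning to the two approximation methods, a concrete illustration of Eqs. 2-5 for the exponential model may help; the short sketch below is not from the paper, and the simulated dataset (true rate, censoring time, seed) is entirely hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical right-censored sample: true rate 1.1 per year, administrative censoring at 1 year
n = 400
event_times = rng.exponential(scale=1 / 1.1, size=n)
t = np.minimum(event_times, 1.0)          # observed follow-up times t_i
d = (event_times <= 1.0).astype(int)      # delta_i: 1 = event, 0 = censored

n_events = d.sum()
lam_hat = n_events / t.sum()              # Eq. 2: events divided by total time at risk
var_lam = lam_hat**2 / n_events           # Eq. 3: variance of the (constant) hazard estimate

mu_hat = 1 / lam_hat                      # Eq. 4: mean survival
var_mu = var_lam / lam_hat**4             # Eq. 5: delta method, d(1/lam)/dlam = -1/lam^2
ci_mu = (mu_hat - 1.96 * np.sqrt(var_mu), mu_hat + 1.96 * np.sqrt(var_mu))

print(f"hazard {lam_hat:.3f} (SE {np.sqrt(var_lam):.3f}), "
      f"mean survival {mu_hat:.3f}, 95% CI {ci_mu}")
```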
The delta method estimates the variance of a function based on a linear approximation of the function [6]. The delta method may be used whenever the derivative of a function can be calculated. This includes all of the commonly used parametric survival functions in Table 1. To illustrate its use, we use the delta method to estimate the variance of the hazard function for both the exponential and Weibull models in Appendix 1 of the ESM. For the exponential model, applying the delta method gives the same equation for variance in the hazard as Eq. 3.
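For reference, a minimal statement of the first-order delta method used here (this is the standard general result, not a formula specific to this paper): for a differentiable function g of the parameter vector θ,

```latex
\[
\operatorname{Var}\!\bigl(g(\hat{\theta})\bigr)
\;\approx\;
\nabla g(\hat{\theta})^{\top}\,
\operatorname{Var}(\hat{\theta})\,
\nabla g(\hat{\theta}),
\]
```

which reduces to g′(θ̂)² Var(θ̂) in the single-parameter case, as used for the exponential mean survival in Eq. 5.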
The multivariate normal method assumes that the estimated model parameters θ̂ follow a multivariate normal distribution, N(θ̂, Var(θ̂)), where Var(θ̂) is estimated during model fitting. For example, for the Weibull model θ̂ comprises the estimated scale and shape parameters, and Var(θ̂) is the estimated variance-covariance matrix. Parameter samples are drawn from the normal distribution and used to generate sample estimates of both the hazard and survival functions using the formulas in Table 1. Variances and confidence intervals are then derived from these sample estimates. The multivariate normal method has been shown to provide similar estimates of uncertainty to the delta method [10]. Its main advantage over the delta method is that it is easier to implement as it avoids calculating derivatives.
The multivariate normal approximation is a Monte Carlo simulation-based method. If B Monte Carlo parameter samples are drawn from N(θ̂, Var(θ̂)), with a single sample denoted as θ_b (b = 1, …, B), then the variance of a function of the parameters, Var(g(θ)), is approximated as:
Var(g(θ)) ≈ (1 / (B − 1)) Σ_{b=1}^{B} (g(θ_b) − ḡ)², where ḡ = (1/B) Σ_{b=1}^{B} g(θ_b).
As this is a simulation-based method, it is not possible to derive analytic expressions for specific models, as in the case of the delta method for the Weibull in Appendix 1 of the ESM. Both the delta method and the multivariate normal approximation are used in common statistical software; the former in STATA and the latter in the flexsurv package in R [11,12].
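A minimal sketch of the multivariate normal method for a Weibull hazard follows; this is not the paper's flexsurv implementation, and the point estimates, covariance matrix and parameterisation (shape k, scale λ, with sampling on the log scale so draws stay positive) are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical Weibull fit: theta = (log shape, log scale) and its covariance matrix
theta_hat = np.array([np.log(1.3), np.log(0.8)])
cov_hat = np.array([[0.004, -0.001],
                    [-0.001, 0.003]])

B = 5000
draws = rng.multivariate_normal(theta_hat, cov_hat, size=B)
shape, scale = np.exp(draws[:, 0]), np.exp(draws[:, 1])

# Weibull hazard h(t) = (shape/scale) * (t/scale)^(shape - 1), evaluated on a time grid
times = np.linspace(0.1, 5, 50)
hazard_draws = (shape[:, None] / scale[:, None]) * (times[None, :] / scale[:, None]) ** (shape[:, None] - 1)

k0, s0 = np.exp(theta_hat)
point = (k0 / s0) * (times / s0) ** (k0 - 1)
lower, upper = np.percentile(hazard_draws, [2.5, 97.5], axis=0)
print(point[-1], lower[-1], upper[-1])   # extrapolated hazard at t = 5 and its 95% interval
```

The same draws can be pushed through the survival function or the mean-survival formula, which is how interval estimates for those quantities are obtained without any derivatives.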
Case Study: Datasets
We created four representative datasets to illustrate the impact of model choice on uncertainty in the estimated hazard and survival functions and mean survival. We generated all four datasets to have a sample size of 400 and a mean survival of 0.9 years. Each dataset had a maximum follow-up of 1 year; any individuals who had not experienced an event by then were censored at 1 year. We applied no other censoring when creating the datasets. Each dataset may be viewed as describing outcomes for a single arm of a clinical trial, and was designed to represent a different common hazard pattern: (1) a flat (constant) hazard, (2) an increasing hazard, (3) a decreasing hazard, and (4) a unimodal hazard. For datasets 2-4, we used a mixture of distributions to avoid the dataset's characteristics being driven by a single model.
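The following sketch shows one way such a dataset could be generated; the two-component Weibull mixture below is purely illustrative, since the exact distributions used for datasets 2-4 are not reported here.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 400

# Illustrative increasing-hazard dataset: mixture of two Weibulls with shape > 1,
# with scales chosen so the uncensored mean survival is roughly 0.9 years
component = rng.random(n) < 0.5
latent = np.where(component,
                  rng.weibull(1.5, size=n) * 0.8,
                  rng.weibull(2.5, size=n) * 1.2)

follow_up = 1.0                              # administrative censoring at 1 year
time = np.minimum(latent, follow_up)
event = (latent <= follow_up).astype(int)

print(latent.mean(), event.mean())           # uncensored mean and observed event fraction
```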
Our intention was not to perform a simulation study. Simulation studies are useful tools for quantitatively evaluating the performance of statistical methods under certain scenarios [13]. In contrast, the aim of this study was to explore the qualitative behaviour of interval estimates arising from different survival models, and how these depend on model choice.
Case Study: Model Fitting and Analysis
We analysed the datasets assuming no knowledge of the distributions from which they were generated. We followed standard modelling practice by producing visual summaries of the data as part of an exploratory data analysis [14,15]. We used two approaches to visualise the empirical hazard function: (1) smooth estimates of the empirical hazard over time based on kernel density smoothing, and (2) unsmoothed estimates using piecewise time periods. We used the functions muhaz and pehaz from the muhaz package [16] in R to generate the smoothed and unsmoothed versions, respectively (the number of piecewise time periods was 25 based on default options). The advantage of examining both of these empirical estimates of the hazard function is that the smoothed estimates are expected to capture the underlying shape of the hazard function represented by the sample data, whilst the unsmoothed versions highlight the variability in the data.
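As an illustration of the unsmoothed estimator, the hazard within each interval can be approximated as the number of events divided by the person-time at risk in that interval. The sketch below is a simple stand-in, not the muhaz/pehaz implementation used in the paper.

```python
import numpy as np

def piecewise_hazard(time, event, n_bins=25, t_max=1.0):
    """Events per unit of person-time in each interval [edges[j], edges[j+1])."""
    edges = np.linspace(0, t_max, n_bins + 1)
    hazards = np.empty(n_bins)
    for j in range(n_bins):
        lo, hi = edges[j], edges[j + 1]
        # person-time contributed to this interval by each subject
        at_risk = np.clip(time - lo, 0, hi - lo)
        events_in_bin = np.sum((event == 1) & (time >= lo) & (time < hi))
        hazards[j] = events_in_bin / at_risk.sum() if at_risk.sum() > 0 else np.nan
    return edges, hazards
```

The returned edges and hazards can be plotted as a step function alongside a kernel-smoothed estimate, mirroring the two visual summaries described above.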
We fitted each of the models in Table 1 to each of the four datasets using the flexsurv package in R [12]. We then used each of the seven models to extrapolate hazard and survival functions for a lifetime. We used the multivariate normal method (the default approach in the flexsurv package) to generate 95% confidence intervals for the estimated hazard and survival functions. We used visual goodness of fit to identify a candidate set of plausible extrapolation models. We calculated estimates of mean survival and the uncertainty in these estimates for the candidate models, as these are an important summary measure in cost-effectiveness analyses.
We also performed a hypothetical cost-effectiveness analysis. This used the increasing hazards dataset (to reflect the impact of ageing), and a two-state "well", "dead" Markov model, with utility values of 1 and 0, respectively. We used hazard estimates from the candidate models to represent outcomes for a control treatment, assuming it would cost £100 every 2 weeks. We also assumed the intervention treatment would have a hazard ratio of 0.75 (applied directly to the hazard estimates) and cost an additional £100 every 2 weeks. We used a lifetime horizon of 10 years, with weekly cycles. The cost-effectiveness measure used was the incremental cost per quality-adjusted life-year gained. The probabilistic sensitivity analysis used 1000 samples.
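A compact sketch of the two-state Markov calculation described above follows; this is not the authors' code, the control hazard below is a hypothetical increasing hazard standing in for one of the fitted candidate models, no discounting is applied, and the £100-per-2-weeks cost is spread as £50 per weekly cycle.

```python
import numpy as np

weeks = 52 * 10                      # 10-year horizon, weekly cycles
t = np.arange(weeks) / 52.0          # time in years at the start of each cycle

def control_hazard(t_years):
    # Hypothetical increasing (Weibull-type) annual hazard; a stand-in for a fitted model
    return 1.1 * 1.5 * t_years**0.5

def cohort(hazard_per_year, weekly_cost):
    alive = 1.0
    qalys, cost = 0.0, 0.0
    for ti in t:
        p_die = 1 - np.exp(-hazard_per_year(ti) / 52.0)   # weekly death probability
        qalys += alive * (1 / 52.0)                        # utility 1 while alive, 0 when dead
        cost += alive * weekly_cost
        alive *= (1 - p_die)
    return qalys, cost

q_ctrl, c_ctrl = cohort(control_hazard, 100 / 2)                               # control arm
q_int, c_int = cohort(lambda x: 0.75 * control_hazard(x), 100 / 2 + 100 / 2)   # HR 0.75, extra cost

icer = (c_int - c_ctrl) / (q_int - q_ctrl)
print(f"Incremental cost £{c_int - c_ctrl:.0f}, QALYs gained {q_int - q_ctrl:.3f}, ICER £{icer:.0f}/QALY")
```

In the probabilistic sensitivity analysis, the same cohort calculation is simply repeated for each of the 1000 sampled hazard functions.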
Results

Figure 1 provides the characteristics of the four representative datasets, showing the Kaplan-Meier survival function for each dataset, and the smooth and piecewise estimators of the hazard function. Figure 1 also includes 95% confidence intervals: for the survival functions these are based on Greenwood's formula [6] and for the hazard estimates these are obtained via bootstrapping, as analytical formulae are not available. Figure 1 demonstrates that the characteristics of the datasets are as expected. Figure 2 provides the seven model-based estimates of the hazard function with 95% confidence intervals. As the hazard function is bounded below by zero, confidence intervals cannot fan out indefinitely. Instead, the logarithm of the hazard (which is not bounded) is displayed. Table 2 provides estimates for selected time periods. The exponential distribution assumes a constant hazard at all time-points. Hence, it only provides a good visual fit to the flat hazard dataset (see Fig. 2, first column). We also observed a poor visual fit for the Gompertz model for both the unimodal and decreasing hazard datasets. For the decreasing hazard dataset, we also observed a poor fit for the log-normal and log-logistic models.
Of the remaining candidate models, the width of confidence intervals always decreased during the extrapolated phase for the log-logistic model. For all other models, there was an increase in the interval width, although this was generally slight for both the log-normal and the gamma distributions.

For the flat hazard dataset, all seven models provide visually good fits to the observed data. The exponential, Weibull and gamma models all extrapolate a (near) constant hazard, whilst the remaining models extrapolate a decreasing hazard. If external evidence or clinical opinion was available to inform the likely long-term behaviour of the hazard (constant or decreasing), this could be used to reduce the set of candidate models to at most three or four models. The choice between the remaining models may then be informed by the behaviour of the extrapolated hazard. For example, of the constant hazard extrapolations, estimates of uncertainty from the Weibull model are the closest to reflecting increasing uncertainty over time. If it is not possible to choose between constant and decreasing hazard models, then the Gompertz model may be preferred as the only model for which the uncertainty in extrapolations includes the possibility of both constant and decreasing hazards. Similar remarks hold for the other datasets. For example, given the variety in the plausible long-term extrapolations arising from the increasing hazards dataset, all of the models appear to underestimate extrapolation uncertainty, with the potential exception of the generalised gamma.

Figure 3 provides graphs of the estimated survival functions over time and 95% confidence intervals on the logit scale to make them unbounded. It is easier to interpret the long-term behaviour of the models from the hazard plots (for example, from the survival plots, it is not clear which models are extrapolating a constant hazard for the flat hazard dataset). The visual lack of fit of the models is also generally easier to interpret from the hazard plots. Note that when using the Gompertz distribution with a decreasing hazard, the extrapolated survival function will not reach zero (that is, it estimates that a proportion of individuals will never die).

Figure 4 displays estimates of lifetime mean survival for the candidate models. The results demonstrate that model choice influences not only the point estimates of mean survival but also the uncertainty about these estimates. For the flat hazard dataset, the estimated standard error in the mean survival arising from the Gompertz model (0.36) is almost seven times larger than the estimate arising from the exponential model (0.05), and about three times larger than the estimates from the log-logistic and log-normal models (0.12 and 0.13, respectively), which provide similar point estimates of mean survival. For the increasing hazard dataset, this difference in the estimate of uncertainty is reversed, with estimated standard errors from the log-logistic and log-normal models (both 0.04) being almost twice those from the Gompertz model (0.02).

Appendix 2 of the ESM provides the summary cost-effectiveness results. There was substantial variation in the estimates of the mean incremental cost-effectiveness ratios from the six candidate models (from £18,500 to £29,600, both per quality-adjusted life-year) and their associated uncertainty, with the widths of the confidence intervals ranging over threefold, from £4400 to £14,500.
Even when models provided near-identical point estimates (£29,500 and £29,600 for the Weibull and generalised gamma, respectively), there remained large variation in the width of confidence intervals (£8400 and £14,500 respectively). For any given model, the expected value of information, which quantifies how much it would be worth spending on further research to reduce uncertainty in the cost-effectiveness results, was very small for a number of willingness-to-pay values. Appendix 2 of the ESM displays the results for a willingness to pay of £20,000 per quality-adjusted life-year gained. At this level, the funding decision would be yes for the log-normal and log-logistic models, but no for the remaining models. Despite this, the expected value of information per person was £0 for the gamma, Weibull and Gompertz models, and between £0.09 and £2.04 for the remaining models. This suggests that extrapolation uncertainty is not appropriately captured, as reducing this uncertainty could change the choice of survival model and hence the funding decision. Appendix 2 of the ESM provides further remarks.
Collectively, these results demonstrate that the effects of model choice on uncertainty in both the hazard functions and lifetime mean survival may be substantial, even for models that provide similar point estimates. Hence, analysts could under- or over-estimate the uncertainty in mean survival, and hence measures of cost effectiveness, unless they carefully consider model selection, in terms of both the model fit during the observed period and quantifying the uncertainty during the extrapolation period.
Discussion
To our knowledge, this is the first study to examine systematically the properties of seven different commonly used parametric survival models in terms of the uncertainty in estimates of extrapolated hazard and survival functions. We have provided exact analytical expressions for the exponential model and described the use of the delta method and the multivariate normal method for obtaining approximate expressions. Using the four hypothetical datasets, we illustrated how the choice of parametric survival model can strongly affect estimates of uncertainty about the hazard over the extrapolation period, and hence mean survival and cost-effectiveness estimates. For each of the datasets considered, long-term uncertainty in the estimated hazard functions could be constant, increasing or decreasing, depending on the chosen model. We observed substantial differences in the estimated magnitude of uncertainty for estimates of the hazard function, lifetime mean survival and cost-effectiveness estimates.
Our findings are generalisable and applicable to datasets beyond the four used in this study. We have covered a range of commonly observed hazard patterns. Results will be qualitatively the same for other datasets that have similar hazard patterns because of the underlying mathematics that defines the estimated variance in the hazard for a given model. The magnitude of estimates of uncertainty will vary depending on the actual dataset used, but we would expect, for example, that the uncertainty in the hazard of a fitted generalised gamma model may fan out over time whereas that for a log-logistic model is likely to narrow over time.
There is existing guidance from the National Institute for Health and Care Excellence Decision Support Unit and in the literature on analysing and extrapolating survival data in cost-effectiveness analyses, which focus on commonly used parametric survival models [2,17]. This guidance does not discuss the implications of survival model choice on estimates of uncertainty in model functions. A recent discussion on methodological challenges noted that extrapolation involves methodological, structural and parameter uncertainty, and that uncertainty increases as the extrapolated period increases [18]. Our study shows that survival model choice fundamentally influences the estimates of uncertainty in hazard, mean survival and cost effectiveness.
There were some limitations of this work. First, we only examined seven commonly used parametric survival models [2]. There are other models that could be applied, as well as more flexible models such as spline-based models and fractional polynomials [19][20][21][22]. Further research into the impact on extrapolation uncertainty of using these models would be beneficial. As noted, six of the seven models that we considered are nested members of the generalised F family [23]. In theory, it may be possible to fit the generalised F model and use significance testing to check if one of the nested models is to be preferred. There are two potential issues with this approach: first, we were not always able to obtain model estimates from the generalised F; second, some of the nested models occur as parameters tend to infinity, and model testing in this case is not straightforward [24]. Another limitation is that we did not consider using a piecewise modelling approach, which allows for the data-generating mechanism to be different over time [25]. However, it would not automatically ensure (as might be preferred) that uncertainty increases as the extrapolated horizon increases: this depends on the chosen survival model. Additionally, fitting the extrapolating model to a subset of the sample data leads to a reduced sample size, and estimates of cost effectiveness can be sensitive to the choice of subset [26]. Further, we did not consider a dataset with multiple turning points in the hazard.

In practice, it is important that model choice involves input from clinical experts [2,27]. This includes understanding both the underlying disease process (data-generating mechanism, or 'true' model) and how it evolves over time. The lack of data in the extrapolation period can create uncertainty in the appropriateness of using the fitted model for extrapolation. For example, Davies and colleagues [28] extrapolated survival estimates for two interventions from Weibull models fitted to 8 years of registry data. For one intervention, the model provided accurate predictions for the 8 years, but gave markedly inaccurate predictions when compared with a longer follow-up of the registry data to 16 years. This demonstrates that models that accurately describe the observed data may not provide accurate extrapolations. Hence, it is important to reflect any external evidence (including clinical knowledge) about the possibility that the data-generating mechanism will remain the same in the future. It is likely that there will be uncertainties in any external evidence, so it is unlikely that its use will fully remove the uncertainties associated with extrapolation.
The results of this study have implications for a health economic analysis. Failure to quantify appropriately uncertainty about inputs, including survival functions, over the observed and extrapolated periods may lead to incorrect estimates of population mean costs and benefits, which may affect reimbursement decisions. As well as affecting estimates of mean cost effectiveness from a probabilistic sensitivity analysis, the choice of survival model will also affect the estimated probability that interventions are cost effective. The results of this study also suggest that the failure to adequately account for extrapolation uncertainty can lead to value of information estimates that are too low.
In Box 1, we outline a set of recommendations for analysts and decision makers who are involved in generating or critiquing extrapolations. These recommendations aim to complement existing guidance [2, 12]. We emphasise that considering estimates of uncertainty is important as a component of the extrapolation process.
Fig. 4 Estimates of lifetime mean survival and uncertainty (95% confidence interval) for seven commonly used statistical time-to-event models studied in four hypothetical datasets

An important implication for further methodological research is to develop methods on how to incorporate the notion that interval estimates of hazard functions should 'fan out' during the extrapolated period. A general approach to characterising extrapolation uncertainty may be required to reflect that we have less knowledge about the data-generating mechanism in the future. A Bayesian approach would provide the ability to both incorporate external information and make probabilistic statements about the parameters of a survival model, taking into account the correlations between these parameters. This external information could include elicited beliefs from clinical experts about survival during the extrapolated period, or the plausibility of different models. Model discrepancy terms can be used to characterise uncertainty in model estimates [29]. An existing case study successfully demonstrated that it is possible to incorporate model discrepancy terms within the extrapolation period with the specific aim of inducing a fanning out of uncertainty in hazard estimates [19]. Further research into this approach should consider how to elicit both discrepancy terms and parameters in survival models [30]. Another advantage of the Bayesian approach is that it removes the need to use a multivariate normal approximation for the joint distribution of parameters in a survival model. Finally, for this work, we generated representative (hypothetical) datasets, but we did not conduct a simulation study. This was intentional, as the representative datasets were sufficient to highlight the impact of model choice on extrapolation uncertainty. Further research could include a simulation study, to quantify the properties of survival models during the extrapolated period.
Conclusions
It is important for cost-effectiveness analyses to include realistic estimates of uncertainty about hazard functions and mean survival. This will improve both the accuracy of, and confidence in, reimbursement decisions. The choice of extrapolating model can have a large impact on estimates of uncertainty about hazard functions and lifetime mean survival. As such, consideration of the plausibility of estimates of uncertainty about hazard estimates in addition to point estimates of the hazard, particularly during the extrapolated period, should be informed by clinical knowledge as part of the model selection process. To support this, it is useful to visualise the observed and modelled hazard estimates as shown in the case study examples in this article. We provide seven new and specific recommendations for analysts and decision makers to follow when considering the uncertainty in the extrapolated period and the impact of parametric survival model choice.
Funding Ben Kearns' time was funded by the National Institute for Health Research Doctoral Research Fellowship (DRF-2016-09-119) 'Good practice guidance for the prediction of future outcomes in health technology assessment'. The motivation for this paper arose from research work undertaken by Alan Brennan, John Stevens and Shijie Ren for the Medical Research Council (Grant number G0902159) 'Methods of extrapolating RCT evidence for economic evaluation'.
Compliance with Ethical Standards
Conflict of interest Ben Kearns, John Stevens, Shijie Ren and Alan Brennan have no conflicts of interest that are directly relevant to the content of this article.
Ethics approval Ethics approval was not required.
Informed consent Informed consent was not required.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creat iveco mmons .org/licen ses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Design and Implementation of a Tether-Powered Hexacopter for Long Endurance Missions
Abstract: A tether-powered unmanned aerial vehicle is presented in this article to demonstrate the highest altitude and the longest flight time among surveyed literature. The grid-powered ground station transmits high voltage electrical energy through a well-managed conductive tether to a 2-kg hexacopter hovering in the air. Designs, implementations, and theoretical models are discussed in this research work. Experimental results show that the proposed system can operate over 50 m for 4 h continuously. Compared with battery-powered multicopters, tether-powered ones have great advantages in specific-area long-endurance applications, such as precision agriculture, intelligent surveillance, and vehicle-deployed cellular sites.
For several applications that require a UAV to work within a specific area for a long time, such as precision smart agriculture, intelligent surveillance, and temporary telecom hotspots for emergencies, tether-powered UAVs have great potential to demonstrate much longer endurance than battery-powered UAVs.
A tether-powered UAV usually contains a multicopter in the air and a ground station, with a conductive tether connecting both of them [16,17]. The electrical power is transmitted from the ground station to the multicopter. Tether-powered multicopters are able to extend their endurance to "near infinite" since the ground station is powered by the grid, a fuel generator, or a huge battery that contains much more energy than can be lifted in the air. In previous research, a tether-powered multicopter can either be directly powered by a tether [16,18,19] or through an additional electricity converter installed onboard [17,20,21]. Usually, one with an electrical converter is able to achieve a longer tether length than those being directly powered by a tether. Tether-powered multicopters can operate with only small batteries onboard or no battery at all. Onboard batteries only serve as a backup power to prevent a possible multicopter crash in case of power failure [22,23]. Several studies of the power tether can be found in [18,19,21,24]. Our previous work provides a mathematical model to optimize tether parameters for different missions [18]. A power-over-tether system is presented to control the tether tension automatically [19]. This research work focuses on the system design, tether selection, and outdoor implementation. A 4-h mission with an operation altitude up to 50 m will be executed to demonstrate the longest endurance and the highest altitude among surveyed experiments on tether-powered UAVs. The structure of this paper is organized as follows. Section 2 introduces the composition of our tether-powered hexacopter, detailed design parameters, the power consumption model, tether selection, and design considerations. Section 3 describes the experimental results. Section 4 presents the analysis of the test results. Section 5 presents the conclusion of this article and improvements that can be done in the future.
Design and Analysis
The system diagram and photos of the proposed tether-powered hexacopter are shown in Figure 1. This system can be divided into two subsystems, the ground station and the aerial vehicle with a mission payload. Both subsystems are connected physically by a tether which transmits electrical power from the ground station to the hexacopter. Navigation signals and telemetry data are transmitted wirelessly through the off-the-shelf radio modules. Flight data are recorded in the flight controller and will be read out and analyzed after every experiment.
Ground Station
The ground station contains three DC power supplies connected in series, a slip ring, a winch, a tether guider, wireless communicators, and a laptop computer which serves as a mission planner. This ground station can be powered by 220 Vac from the grid or a diesel generator.
Two 750 W/48 V DC power supplies (RSP-750-48, Mean Well, Taiwan) and a 1500 W voltage adjustable DC power supply (KXN-3050D, Zhaoxin, China) are capable of providing maximum 15.7 A current and maximum 120 V DC voltage to the slip ring (M022A-06, Senring, China), which conducts electricity to the tether when the winch is rotating. The winch is controlled by a 100 W industrial AC servo motor (FRMS1020604D, Hiwin, Taiwan) under a constant torque mode to maintain a suitable tension of the tether. The torque reference is set according to the status of flight. The angular speed of the winch is limited under 200 rpm to protect the slip ring. A tether guider is installed in front of the winch to help tether accumulate uniformly on the spool. The winch controls are detailed in Appendix A. Telemetry data is received by a laptop and decoded by software "Mission Planner". The hexacopter is navigated manually through a radio transmitter.
Hexacopter
The hexacopter subsystem is built with off-the-shelf components listed in Table 2. Its airframe has 2000 g weight and it has the ability to carry a 1500 g payload because the heavy battery is omitted. The payload of the following experiments is an action camera (Hero3+, GoPro) with 170 g weight. Its mission is to periodically take videos/photos of the target site which will later be turned into a time-lapse video. The payload module can be changed to others according to mission needs.
Instead of being powered by a large battery, the hexacopter is powered by a 600 W custom-built DC-DC converter (based on LTC3871, Analog Devices) which receives high voltage from the tether and delivers a controlled constant 24 V to every component on the hexacopter. The details of the DC-DC converter, including its topology, efficiency, and the feedback control circuit are supplemented in Appendix B.
As shown in Figure 2a, an aluminum heatsink and two fans dissipate the heat generated by the DC-DC converter. Figure 2b is the thermal image of the DC-DC convertor providing 30 A at 24 V. The hottest region occurs at the location of high side MOSFETs at 86.8 °C, which is within their safe range.
A 36 Wh battery serves as a backup power which provides around 4 min of flight time to land the hexacopter safely in case of emergency due to the malfunction of the power supply chain. This backup battery is pre-charged to 24.1 V and then directly parallel-connected to the DC bus, which is internally controlled at 24 V. After turning on the system, the backup battery gradually discharges to 24 V and then always stays with the DC bus in the following. The experimental data show that the DC bus voltage runs between 23 to 24 V, which will not damage the battery.
Tether Optimization
The maximum range or altitude of a tether-powered multicopter is limited by the tether length and the power that can be used. Considering the condition where the hexacopter hovers right above the ground station, the length of the tether is equal to the altitude. The power needed by the hexacopter, P_n, for hovering in the air can be expressed as Equation (1):

P_n = [ n · f(W_IA / n) + P_AV + P_PL ] / η    (1)

where η represents the efficiency of the DC-DC convertor of the PSM and is fixed to 0.9307 as the worst case. The symbol n, which is 6 for a hexacopter, represents the quantity of rotors. The function f( ) represents the thrust-to-power function of a single rotor, where the input unit is gram-force and the output unit is Watt. W_IA represents the weight in the air. P_AV and P_PL represent the power consumed by the avionics and the payload, respectively. The thrust-to-power function used in this research work is acquired from our test bed described in [18]. The curve fitting with the momentum theory [25] can be described by Equation (2):

f(F_r) = F_r^(3/2) / ( η_r · sqrt(2 · ρ_a · A_r) )    (2)

where F_r is the thrust force produced by the rotor, η_r is a figure of merit of the rotor, ρ_a is the density of air, and A_r is the swept area of the propeller. The test data are shown in Figure 3, and the 3/2-order fitted regression becomes Equation (3):

f(F_r) = k · F_r^(3/2) + 1.95    (3)

where k is the coefficient obtained from the fit. The non-zero constant 1.95 in Equation (3) implies the static power loss of the ESC when the rotor is not rotating. W_IA can be represented as Equation (4):

W_IA = W_TO + d · H    (4)

where W_TO is the takeoff weight of the hexacopter, d is the tether's weight per unit length, H is the length of the tether, and d·H is the weight of the tether in the air. The power submitted by the ground station, P_G, and the residual power that the hexacopter can receive, P_r, can be expressed as Equation (5):

P_G = V_G · I_G,    P_r = P_G − I_G² · ρ · H    (5)

where V_G and I_G represent the voltage and current submitted by the ground station, respectively, and ρ is the tether's resistance per unit length, so the power consumed by the tether is I_G² · ρ · H. The maximum power that can be transferred from the ground station to the hexacopter is limited by the minimum allowable input voltage of the DC-DC converter (V_DCH,MIN). For most DC-DC converters, an under-voltage trip is likely to occur if the input voltage (V_DCH) drops below V_DCH,MIN. This limitation can be expressed as Equation (6):

V_DCH = V_G − I_G · ρ · H ≥ V_DCH,MIN    (6)

With adequate rearrangements, Equations (5) and (6) can be combined into Equation (7), which bounds the power that can be received by the hexacopter, P_r:

P_r ≤ V_DCH,MIN · (V_G − V_DCH,MIN) / (ρ · H)    (7)
A long tether containing copper filaments has a significant weight. The lightest option on the market is the 44A012X series (TE Connectivity, USA) unshielded, unjacketed, general-purpose, 600 V, two-conductor 1-pair twisted cable. The weight and resistance per unit length are listed in Table 3. Substituting these parameters into the above mathematical model, we have Figure 4 showing the overall performance of different tether sizes in American Wire Gauge (AWG). The solid lines, derived by Equation (1), indicate the needed power versus the tether's length. Their slope is positive because more power is needed for hanging a heavier tether in the air. On the other hand, the dashed lines, derived by Equation (7), indicate the received power versus the tether's length. Their slope is negative because more voltage is dropped due to the higher resistance. For a certain tether size, the junction of the solid line and the dashed line indicates the maximum altitude and its corresponding power, which are summarized in Figure 5.
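A minimal numerical sketch of this trade-off is given below in Python: it evaluates the needed power (solid lines) and the deliverable power (dashed lines) over tether length and reports the largest altitude at which hovering remains feasible. All numeric inputs (the fitted rotor coefficient, avionics and payload power, takeoff weight, tether weight and resistance per metre, supply voltage, and minimum converter input voltage) are placeholder assumptions for illustration, not the values from Table 3 or Figure 3.

```python
import numpy as np

# Placeholder parameters (illustrative only; the actual values are in Table 3 and Figure 3)
ETA_PSM    = 0.9307    # worst-case DC-DC converter efficiency
N_ROTORS   = 6         # hexacopter
K_FIT      = 0.0067    # assumed rotor coefficient of Equation (3), W per (gram-force)^1.5
P_STATIC   = 1.95      # static ESC loss per rotor, W
P_AV_PL    = 20.0      # assumed avionics + payload power, W
W_TO       = 2170.0    # assumed takeoff weight (airframe + camera), gram-force
D_TETHER   = 7.0       # assumed tether weight per metre, g/m
RHO_TETHER = 0.067     # assumed round-trip tether resistance per metre, ohm/m
V_G        = 120.0     # ground-station voltage, V
V_DCH_MIN  = 36.0      # assumed minimum allowable converter input voltage, V
P_PSM_MAX  = 500.0     # power limit adopted as a safety margin

def rotor_power(thrust_gf):
    """Equation (3): per-rotor electrical power for a given thrust in gram-force."""
    return K_FIT * thrust_gf ** 1.5 + P_STATIC

def power_needed(h):
    """Equation (1): power the hexacopter needs to hover with h metres of tether in the air."""
    w_ia = W_TO + D_TETHER * h            # Equation (4): weight in the air
    return (N_ROTORS * rotor_power(w_ia / N_ROTORS) + P_AV_PL) / ETA_PSM

def power_deliverable(h):
    """Equation (7): power the tether can deliver at the minimum allowable converter input voltage."""
    return V_DCH_MIN * (V_G - V_DCH_MIN) / (RHO_TETHER * h)

heights = np.linspace(5.0, 150.0, 2000)
ok = (power_needed(heights) <= power_deliverable(heights)) & (power_needed(heights) <= P_PSM_MAX)
if ok.any():
    print(f"Maximum hover altitude under these assumptions: {heights[ok].max():.1f} m")
else:
    print("No feasible altitude with these parameters")
```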
Margins for Safety
For the proposed system, Figure 5 shows that the optimum tether size is 12 AWG, which achieves a 103 m theoretical maximum altitude with a 1723 W power requirement; however, such a PSM is too heavy to be lifted by the hexacopter. The nominal power output of our PSM is 600 W. Considering a non-ideal condition with gusts, side wind, and thermal drift of the mathematical model, we set the power limit at 500 W, the dotted line in Figure 5. Under this safety margin, the 20 AWG tether is chosen and equipped for the following experiments. The tether's length should not exceed 69 m, the theoretically achievable boundary.
Experimental Results
Two types of tests, namely a functional test and an endurance test, have been executed. The parameters of each test are listed in Table 4. In the functional test, the hexacopter will reach an altitude of 15 m and fly for 30 min. The objective of the functional test is to confirm the functionality of the whole system and the rationale of the test procedures, and to gather flight data to verify the design parameters. The goal of the endurance test is to prove that the system is capable of "near infinite" air time. The hexacopter will reach an altitude of 50 m and fly for 240 min (4 h) or more to prove such a feature. All experiments were performed outdoors on the campus lawn.
Functional Test
The log of the functional test is shown in Figure 6. The hexacopter reached an average altitude of 17 m and a maximum altitude of 21 m. The PSM provided an average of 12 A current at 23.5 V, equivalent to 282 W of power consumption. The mission of 50 min has been completed successfully and the functionality of the whole system can be confirmed.
Endurance Test
The log of the endurance test is shown in Figure 7. The hexacopter reached an average altitude of 48 m and a maximum altitude of 59 m. The PSM provided an average of 17.9 A current at 23.5 V, equivalent to 421 W of power. The power requirement in the endurance test is higher than in the functional test because the tether length is much longer. The tether generates significant heat while conducting power from the ground station to the hexacopter. The side wind is also stronger at higher altitudes; therefore, the hexacopter needs more power to stabilize itself. The mission of 240 min has been completed successfully and thus the "near infinite" flight time can be demonstrated with a time-lapse video [26]. Among our surveyed research works so far, no battery-powered UAV can fly for longer than 4 h. Averaged measurement data during the endurance test are listed in Table 5 along with those derived from the model in the previous section. In the endurance test, the total tether length in the air is 60 m and the data are derived based on this length.
Discussions
The 421 W power consumption was measured in an outdoor environment with moderate wind, evidenced by our time-lapse experiment video [26]. According to the historical weather records [27], the averaged wind speed was 2.75 m/s at 10 m height above the ground level. Therefore, the wind speed at 50 m altitude was estimated at 5 m/s according to the boundary layer theory. Our system was tested at 8 m/s wind speed, and it worked normally. In the future, we will try to equip the UAV with an anemometer to measure the in-situ wind speed. The DC-DC convertor provides 600 W, which implies a 179 W (29.8%) spare. Furthermore, a 36 Wh battery serves as a backup power which provides 4 min of flight time to land the hexacopter safely in case of an emergency due to the malfunction of the power supply chain. That implies the parallel-connected backup battery can provide an extra 540 W of power (1140 W in total), which can satisfy the power requirement due to strong wind shear and the movement required by the mission. A 3.88% power consumption difference between the theoretical model and measurements is observed. The model inaccuracy could be caused by the reasons listed below:
1. The extra forces caused by the tether: the force caused by wind and tension on the tether may cause the hexacopter to produce extra thrust in order to balance itself;
2. Imbalance of the hexacopter: the power supply and mission payload may cause an imbalance, thus requiring extra thrust for the hexacopter to maintain its altitude;
3. Side wind: since the hexacopter is set to hover at a fixed position throughout the test, side wind may cause it to produce more thrust than estimated in order to maintain its position.
Conclusions
The proposed tether-powered hexacopter demonstrates successful outdoor operation with an endurance of 4 h at a height of 50 m. This system achieves the longest flight time and the highest altitude among surveyed experiments. A mathematical model is also proposed to estimate the optimal tether size and its theoretical length limit. The model error is 3.88% compared with the measured data. The reasons may be the extra force exerted by the tether, imbalance of the hexacopter, and the side wind. The influences of the above reasons can be considered in future works to increase model accuracy.
Acknowledgments:
The authors would like to thank Force Precision Instrument Co. Ltd., Taiwan for technical support.
Conflicts of Interest:
The authors declare no conflict of interest.
Appendix A. Details of Winch Control
To prevent the tether from dangling and to realize "auto-rewind", the winch motor controller/driver (D2T0423-S, Hiwin, Taiwan) works in constant torque mode. Three flight scenarios, namely "Ascend", "Hover and slowly descend" and "Fast descend", are arranged and different torque references are set accordingly. In the "Ascend" scenario, the torque value is set to slightly negative so that the hexacopter can pull out the tether effortlessly. In the "Hover and slowly descend" scenario, a positive torque reference is set in order to prevent the tether from dangling. In "Fast descend", a relatively greater torque is set so that the rewind speed can catch up with the hexacopter's descending rate. Torque values are preset into the motor controller according to each scenario. The scenario is switched by the operator manually during flight.

Appendix B. Details of the DC-DC Converter

Figure A1. The schematic drawing of the DC-DC converter's printed circuit board. "HV" labels the high voltage input; "LV" labels the low voltage output.

Before being utilized in the air, the DC-DC convertor was tested in the lab. The input voltage was 90 V and the output voltage was set to 24 V. The loading current was tuned from 8 A to 30 A. No component failure nor protection reaction was observed during the test. The efficiency is charted in Figure A3, with the efficiency value over 93% through the entire test.
Implementation of Cloud Computing Based on Infrastructure as a Service (IaaS) to Improve Transaction Quality (Case Study Shop of Central Mart Pekanbaru)
ABSTRACT
Introduction
The advancement of science and technology is currently accelerating quickly, and many connected areas are doing the same [1]. Particularly in the modern era of communication, computers with both software and hardware components play a crucial role in assisting the development process [2]. As a result, many organizations or businesses are attempting to replace their manual transaction system with a digital one [3]. Cloud computing is a type of computer technology that uses the internet as its primary terminal to administer infrastructure and software as a service [4]. According to the National Institute of Standards and Technology (NIST), cloud computing is an information technology model that is easy to use and can be accessed anywhere, with computing resources that are quickly released with minimal effort on the part of management [5], [6].
The cashier is an important aspect of the transactions carried out by buying and selling businesses [7]. To improve performance, a computer-based system is needed to improve the performance of employees and services, especially cashiers who serve customer payment transactions [8], [9]. In cashier service, many manual transaction errors are caused by the absence of a system that can help cashiers complete transactions [10]. Currently, there are still many cashier applications that are accessed using a local network, or that can only be accessed through devices connected to the same network. The weakness of this local network system is that business actors must provide a device that becomes a server for storing transaction data for the cashier application [11]. This, of course, carries a high risk of data loss if the server device is damaged or physically lost. In addition, transaction reports cannot be monitored online if a local network is still being used. The cloud computing service model applied to this system is Infrastructure as a Service (IaaS).
IaaS (Infrastructure as a Service) is a service that "rents out" basic information technology resources, which include storage media, processing power, memory, operating systems, network capacity, and others, which users can use to run their applications [12], [13]. One of the advantages of IaaS is reduced capital costs because, with this service, there is no need to incur additional costs [6] to buy new computers or server equipment. The use of IaaS allows business actors to increase or decrease resources quickly under certain conditions [14], [15]. IaaS implementation has been carried out by previous researchers, such as [16], using the ownCloud platform as a private cloud and Nextcloud as a public cloud to build hybrid cloud storage, utilizing Infrastructure as a Service to add storage capacity without requiring extra costs; applying the benefits of Infrastructure as a Service, they obtained 39% memory and CPU usage when uploading 3 data files measuring 300 MB, 500 MB, and 1024 MB with 3 clients almost simultaneously. Another study [17] produced a cloud computing application for web-based server service providers using the Proxmox VE hypervisor with the IaaS service model. Based on these studies, this research applies cloud computing-based technology to the cashier application system to improve transaction quality, so that the transaction process becomes better and more efficient.
Research Method
The research stages describe the flow of the methodology carried out by the researchers for implementing cloud computing in cashier applications to improve service quality. The research flow is as follows. The literature review is the initial step in the research process and serves as the basis for preparing a research report [18]. The literature review in this study covers descriptions of IaaS-based cloud computing and other research materials gathered from reference sources to serve as the basis for the research activities. In the literature review, this study also incorporates reviews, summaries, and the authors' thoughts on the different sources of literature used, such as articles, books, slides, and information from the internet, on the themes mentioned.
Identification of problems
The problem analysis follows from the results of the preceding literature review, which serves as the reference for the researchers. The references found are investigated to determine ways to implement and complete the work. Based on the literature review, the problem identified in this study relates to the deployment of cloud computing to improve services in the cashier application. The problems found are that many cashier applications are currently accessed using a local network, or can only be accessed through devices connected to the same network. The weakness of this local network system is that business actors must provide a device that acts as a server for storing transaction data for the cashier application. This, of course, carries a high risk of data loss if the server device is damaged or physically lost. In addition, transaction reports cannot be monitored online if a local network is still being used. Figure 2 is the schematic of the checkout.
Data collection
Data collection is carried out to obtain the information needed to meet the research objectives [19]. The aim, given in the form of a hypothesis, is a temporary answer to the research question. This answer still needs to be tested, and it is for this purpose that data collection is essential. The data were obtained from a predetermined sample, which consists of a set of analysis units corresponding to the research goals. In this study, interviews were conducted following interview criteria. The data collection approach was carried out by conducting direct interviews with the leaders or shop owners to discuss the problems of the transaction system in use in connection with the subject under study, and to gather objective data. The data collection carried out in this investigation employed: observation, i.e., direct observation of the store by looking at the current system; and interviews, conducted to find out the difficulties that arise in the present system, or whether a system exists at all. Observations were made by asking many questions posed directly to store owners in the Pekanbaru neighborhood.
Design
The next stage is design, where modeling activities are carried out, starting from system modeling and architectural modeling to database modeling. System and architectural modeling uses Unified Modeling Language (UML) diagrams, which consist of use case diagrams, sequence diagrams, and so on [20]. This study uses use cases to design the system. The use case is a model for the behavior of the information system that will be made [21]. Use cases describe an interaction between one or more actors and the information system to be created [22]. This diagram is important for organizing and modeling the behavior of a system that is needed and expected by users [23]. A use case diagram describes the functions and needs from the user's perspective [24].
Use case diagrams
There are actors who can access use cases in the system, including actors as users.
Figure 3. Use case diagrams
Table 1 is a description of Figure 3.
Table 1. Description of the use case diagram
Actor: Cashier. Explanation: can log in to the application; can manage goods data on the website; can make sales transactions.

2.5. Network topology and system network architecture

Figure 4 is the IaaS infrastructure used in this study. The design of the new system to be built can be explained as follows:
1. A virtual machine is created through cloud computing via Azure IaaS.
2. The installation and administration processes are carried out automatically by the service provider, in this case Azure IaaS.
3. Virtual machines that have been successfully built via IaaS are also equipped with security features, and their IP addresses have been determined (private IP and public IP).
4. The client only needs to set up the web server and applications on the operating system of the IaaS virtual machine that has been built.
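Once the virtual machine has been provisioned, it can also be inspected programmatically. The short Python sketch below uses the Azure SDK for Python (not part of this study's toolchain) to list the IaaS virtual machines in a resource group; the subscription ID and resource-group name are hypothetical placeholders.

```python
# pip install azure-identity azure-mgmt-compute
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

SUBSCRIPTION_ID = "<your-subscription-id>"   # placeholder
RESOURCE_GROUP = "central-mart-rg"           # hypothetical resource group name

credential = DefaultAzureCredential()        # picks up az login / environment credentials
compute_client = ComputeManagementClient(credential, SUBSCRIPTION_ID)

# List the virtual machines provisioned through IaaS in the resource group,
# e.g. to confirm the cashier VM exists and note its location and size.
for vm in compute_client.virtual_machines.list(RESOURCE_GROUP):
    print(vm.name, vm.location, vm.hardware_profile.vm_size)
```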
Cloud computing is applied to the cashier system using Microsoft Azure. The first stage of this research is to install and configure IaaS, then implement the cashier system and conduct trials. The security network on the built virtual machine defines the Inbound Security Rules and Outbound Security Rules; the appearance can be seen in Figure 8.
Result and Discussion
The IaaS overview page display can be seen in Figure 5 below:
System Testing
The final stage of this research is testing the system that has been designed and built. The hardware specifications used can be seen in Table 2 below, and the software used can be seen in Table 3 below. Azure Infrastructure as a Service (IaaS) is a cloud-based service that offers many advantages, including a free version available for students and a strong level of security [25]. Azure allows developers to build applications in their choice of languages, including .NET, Java, and Node.js, and then gives them access to tools such as Visual Studio. This allows developers to stay productive while concentrating on the coding rather than managing it. The web server used is Apache because this web server software is free and open source, which allows users to upload websites on the internet. Meanwhile, the DBMS used is MySQL because, apart from being open source, MySQL is also a database server that is free under the GNU General Public License (GPL), so it can be used for personal or commercial purposes without having to pay for a license.
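As an illustration of how the cashier application hosted on the IaaS virtual machine could talk to this MySQL back end, the short Python sketch below records one sale. The host address, credentials, database name, and table layout are hypothetical placeholders, not values from this study.

```python
import mysql.connector  # pip install mysql-connector-python

# Placeholder connection details: the public IP of the Azure IaaS VM,
# plus hypothetical credentials and schema for the cashier database.
conn = mysql.connector.connect(
    host="20.0.0.10",        # assumed public IP of the IaaS virtual machine
    user="cashier_app",      # hypothetical application account
    password="change-me",
    database="central_mart", # hypothetical database name
)

def record_sale(item_code: str, quantity: int, unit_price: float) -> None:
    """Insert one sales transaction; assumes a hypothetical 'sales' table."""
    cursor = conn.cursor()
    cursor.execute(
        "INSERT INTO sales (item_code, quantity, unit_price) VALUES (%s, %s, %s)",
        (item_code, quantity, unit_price),
    )
    conn.commit()
    cursor.close()

record_sale("SKU-001", 2, 15000.0)
conn.close()
```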
Based on the results of implementing the new system, the researchers conducted an interview with the owner of the Central Mart Store, Mr. Musaat Zaki, on 28 July 2022 at Jl Melati, Pekanbaru. The following is a list of questions and answers from the source:
1. How do you respond now that the cashier system can be accessed online?
Answer: I don't have to worry about my data being lost if the computer equipment in my shop is damaged or lost.
2. What convenience do you get now that the cashier application is online?
Answer: The application makes it easy for me to access it via my cellphone even when I'm outside the store.
3. What results do you see after this system is made online?
Answer: The transaction monitoring process is easier and faster, so I can quickly find out the stock of goods and my store's financial reports.
4. After going online, how easy is it for you to improve the quality of transactions?
Answer: Because I can easily control stock online, I can immediately replenish items that are about to run out before they are empty.
5. What changes have you encountered in storing data using the cloud?
Answer: Previously, all transaction data was on the same device as my application; now the device in my store only functions as a device to run the application.
From the description of the interview above, it can be concluded that applications using cloud computing can make it easier for businesses to manage their business applications and finances, because they do not need to think about the cost of server equipment. In addition, the application can be accessed online, which makes it easier for businesses to monitor their store stock.
Conclusion
Based on the results described in the previous chapter, several conclusions can be drawn, as follows:
1. Azure IaaS can be an option for building cloud computing because it has been proven to be easy to use and powerful.
2. Existing public IP addresses can be accessed using the internet without any significant obstacles.
3. With this IaaS service, it can help or make it easier for store admins to access this application remotely (online).
For the improvement and development of the application system that has been built, this research offers several suggestions that can be used as a basis for future research:
1. Subsequent research could provide additional domains for the IaaS public IPs to make them easier for users to remember when accessed.
2. Increase the type of service used so that the public IP that will be used for IaaS access can be determined by the user.
3. To increase security, it is necessary to install SSL (Secure Sockets Layer) on the domain or public IP.
Figure 4. IaaS infrastructure
Figure 5. Overview of the IaaS operating system used on the test device
Laptop Equipment 1: this device is simulated as the device used by cashier 1, with the device IP address 192.168.100.96. The IP address can change depending on the connection used, but when the testing was done the device obtained the IP above.
Figure 7. Login page on Device 2 (as Admin)
Figure 11. Purchase list report
Figure 16. Home page of Azure IaaS
Table 2. Testing hardware
Table 3. Software used
Treatment of Severe Ptosis by Conjoint Fascial Sheath Suspension
Objective. To explore the role of conjoint fascial sheath (CFS) suspension in the treatment of severe ptosis. Methods. A total of 110 patients with severe ptosis who were admitted to our hospital from May 2018 to December 2020 were included. Fifty-seven patients treated with frontalis suspension were assigned into group A, and the remaining 53 patients treated with CFS suspension were assigned into group B. The curative effect, ocular surface alterations, complications, and satisfaction in the two groups were compared. Results. Patients in group B suffered from less severe upper eyelid retraction and lid lag than those in group A, as well as more limited range of motion (ROM) (P < 0.05). The curative effect and patient satisfaction in group B were higher than those in group A (P < 0.05). Postsurgical complications in group B were fewer than those in group A (P < 0.05). Conclusion. CFS suspension is effective in the treatment of severe ptosis.
Introduction
Blepharoptosis is common in ocular plastic surgery and may be induced by multiple mechanisms, for example, congenital ptosis caused by low function of fibroadipose tissue in the levator palpebrae superioris (LPS) muscle, myogenic ptosis caused by dysgenesis-induced weakness of the LPS muscle, and neurogenic ptosis caused by complete or partial loss of cranial nerve III [1,2]. Blepharoptosis refers to the drooping of either or both sides of the upper eyelid, resulting in a narrow palpebral fissure and covering the eyes [3], and it may also be associated with other eye diseases or systemic diseases [4,5]. Aponeurosis repair and levator myectomy are preferred options for its treatment. Frontalis suspension, a common surgical treatment for patients with severe ptosis and poor levator function [6], establishes a connection between the frontalis and the tarsus, thus correcting the position of the eyelid through the elevatory force of the frontalis [7]. However, it cannot fully meet the normal physiological requirements and is commonly associated with postoperative keratitis, and vulnerable patients are prone to corneal complications [8]. The conjoint fascial sheath (CFS) has been histologically confirmed to be a kind of fascial tissue membrane with elasticity and toughness. It is widely used in ptosis correction by connecting the special muscle sheath of the levator in the CFS with the levator muscle to suspend the eyelid [9]. This study is aimed at exploring the role of CFS suspension in the treatment of severe ptosis.
General Data.
A total of 110 patients with severe ptosis who were admitted to our hospital from May 2018 to December 2020 were included. Fifty-seven patients treated with frontalis suspension were assigned into group A, and the remaining 53 patients treated with CFS suspension were assigned into group B. 2.3. Methods. Patients in group A underwent frontalis suspension: two to three drops of tetracaine gel were used for topical anesthesia, and 20 g/L lidocaine was used for subcutaneous and subconjunctival infiltration anesthesia. Skin and subcutaneous tissue were incised to expose orbicularis oculi, and the frontalis muscle was separated through an incision above the eyebrow arch. A tunnel was made on each pedicel of muscle flaps through a 5 mm incision, and mattress sutures of two muscle flaps were pull out from the eyebrow incision through the tunnel. The frontalis muscle and subcutaneous tissue were bluntly dissected upwards to 15-20 mm above the eyebrow arch, with a width of 25-35 mm. Figure 1: Comparison of upper eyelid retraction after surgery. Upper eyelid retraction length in group B is shorter than that in group A at 1 month and 3 months after surgery (P < 0:05). * P < 0:05 vs. group A. Figure 3: Comparison of lid lag after surgery. Lid lag in group B is lower than that in group A at 1 month and 3 months after surgery (P < 0:05). * P < 0:05 between the two groups. BioMed Research International
The frontalis muscle and periosteum were separated to the same plane as subcutaneous separation layer, separating the frontalis muscle from the skin at the top and separating the frontalis muscle from the periosteum at the bottom. The inner, middle, and outer points were fixed at the anterior one-third of tarsus. Curvature and height of sutures were adjusted to ensure the normal head-up of patients, and the height of palpebral fissure was controlled to ensure the complete separation of eyelid and eyeball. Afterwards, the incision was sutured to form a double eyelid.
Patients in group B were treated with CFS suspension: all patients were placed in the supine position and anesthetized in the same way as group A. They were operated on under the microscope. The marking line was designed, and eyelid infiltration anesthesia was carried out. The skin was cut along the line, and the orbicular muscle at the lower edge of the incision was removed to expose the tarsus. The incision was separated upward to 5 mm above the fornix along the space between Muller's muscle and the levator aponeurosis, in order to fully expose the CFS. Three pairs of mattress sutures were made with 5-0 absorbable suture to fix the CFS at the anterior one-third of the tarsus, so that the upper eyelid margin of the affected eye was located at the upper edge of the cornea when looking straight ahead in a sitting position. The suturing height was adjusted to make the margin of the eyelid smooth and natural. 5-0 silk thread was used to lift the levator aponeurosis, and the incision was sutured intermittently.
Outcome Measures
2.4.1. Corrective Effect Assessment [11]. An upper eyelid located 1~2 mm below the upper corneal margin was considered well corrected; an upper eyelid located at or above the upper corneal margin was considered overcorrected; an upper eyelid located >2 mm below the upper corneal margin was considered undercorrected; no change in the position of the upper eyelid was considered a relapse.
2.4.2. Ocular Surface Assessment. Tear break-up time (BUT) and the Schirmer I test (SIt) were monitored before and one week after surgery. BUT was tested 3 consecutive times, and tear film instability was identified at BUT < 10 s; SIt was tested for 5 min, and a length of filter paper wetted less than 5 mm indicated low secretion.
Statistical Analysis
SPSS 21.0 (SPSS Inc., Chicago, IL, USA) was employed for statistical analysis. Measurement data were expressed as mean ± SD, and intergroup comparisons used the t-test. Count data were expressed as n (%), and intergroup comparisons used the chi-square test. Differences were considered statistically significant at P < 0.05.
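For illustration only, a minimal Python sketch of the same two comparisons (the study itself used SPSS); none of the numbers below are data from this study.

```python
# Hypothetical re-implementation of the reported analysis strategy:
# independent-samples t-test for measurement data, chi-square test for counts,
# significance declared at P < 0.05. All values are placeholders.
import numpy as np
from scipy import stats

# Placeholder measurement data (e.g., upper eyelid retraction in mm)
group_a = np.array([0.65, 0.61, 0.70, 0.58, 0.66])
group_b = np.array([0.32, 0.30, 0.35, 0.28, 0.33])
t_stat, p_measure = stats.ttest_ind(group_a, group_b)

# Placeholder count data (e.g., patients with vs. without lid lag per group)
contingency = np.array([[32, 25],    # group A: with lid lag, without
                        [19, 34]])   # group B: with lid lag, without
chi2, p_count, dof, expected = stats.chi2_contingency(contingency)

print(f"t-test P = {p_measure:.4f}, chi-square P = {p_count:.4f}")
```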
General Data.
There was no difference in general data between the two groups (P > 0.05), as shown in Table 1.
Upper Eyelid Retraction after Surgery. Upper eyelid retraction in group A and group B at 1 month after surgery was 0.65 ± 0.14 mm and 0.32 ± 0.11 mm, respectively, and those values were 0.54 ± 0.12 mm and 0.21 ± 0.07 mm at 3 months after surgery. This suggests that the upper eyelid retraction in group B was shorter than that in group A at 1 month and 3 months after surgery (P < 0.05), as shown in Figure 1.
Lid Lag after Surgery. Lid lag in group A and group B was 56.14% and 35.85%, respectively, at 1 month after surgery, and those values were 36.84% and 18.87% at 3 months after surgery. The lid lag in group B was lower than that in group A at 1 month and 3 months after surgery (P < 0.05), as shown in Figure 3.
Ocular Surface before and after Surgery.
The BUT in group A and group B was 16.68 ± 3.29 s and 16.33 ± 3.18 s, respectively, before surgery, while after surgery, the values were 15.74 ± 2.78 s and 15.26 ± 2.59 s. The SIt in group A and group B was 12.84 ± 1.54 mm and 12.46 ± 1.48 mm, respectively, before surgery, while after surgery, the values were 11.76 ± 1.46 mm and 11.54 ± 1.32 mm. There was no difference in BUT and SIt between the two groups either before or after surgery (P > 0.05), as shown in Figure 4.
Comparison of Corrective Effect.
Corrective effect in group B was better than that in group A after surgery (P < 0.05); see Table 2.
Comparison of Complications.
Postsurgical complications in group B were fewer than those in group A (P < 0.05), as shown in Table 3.
Comparison of Patient Satisfaction.
Patient satisfaction in group B was higher than that in group A (P < 0.05), as shown in Table 4.
Discussion
Ptosis, a common disease encountered in ocular plastic surgery [12], refers to drooping or displacement of the upper eyelid, accompanied by narrowing of the vertical palpebral fissure. Ptosis is generally mild and insignificant, but it may cause visual impairment in the few patients whose pupil is completely covered [13][14][15], affecting quality of life and increasing the disease burden. In this study, we compared the efficacy of CFS suspension and frontalis suspension, and it turned out that the upper eyelid retraction, lid lag, and ROM of patients undergoing CFS suspension improved more than those of patients undergoing frontalis suspension. This may be because the long relaxation time of the elastic materials used in frontalis suspension leads to unstable results and upper eyelid retraction. In frontalis suspension, excessive movement of the frontalis muscle may induce inflammation, infection, extravasation, extrusion of materials, eyelid deformation, and involuntary paroxysmal movement of the eyelids in the upward direction [16]. In comparison, CFS suspension is less invasive and less harmful to tissues and blood vessels and does not change the movement direction of the upper eyelid, thereby reducing lid lag. This may be one of the reasons why CFS suspension is better than frontalis suspension. The tear film is a protective coating lining the outermost layer of the corneal epithelium that plays a pivotal role in maintaining eye health [17,18]. It prevents excessive evaporation and the entry of dust and other foreign particles, resists bacterial infection, lubricates the eyelids, and maintains optimal visual performance [19,20]. SIt is the most commonly used method to evaluate the production of aqueous tears [21], and BUT has been widely used to measure tear film stability and diagnose common tear issues [22]. Generally, plastic surgery or repair of the upper eyelid may lead to decreased corneal sensation and increased tear production in the early stage after surgery. However, in this study, there was no difference in ocular surface alterations between the two groups. Frontalis suspension has no effect on the lacrimal and accessory lacrimal glands, so tear secretion remains under normal control [23]. Therefore, it is suggested that both CFS suspension and frontalis suspension have no significant influence on the ocular surface of patients.
Our findings demonstrated that CFS resulted in fewer postsurgical complications. In frontalis suspension, materials are used to connect the eyelid to the eyebrow, and dysfunctional eyelid is lifted through the frontalis muscle [24], whereas CFS suspension connects the special muscle sheath of levator in CFS with levator muscle to suspend the eyelid, thus reducing complications such as infection, extrusion, breakage, and granuloma formation. This may also be one of the reasons for higher satisfaction of patients undergoing CFS suspension. There is evidence that CFS suspension has good and lasting efficacy and short recovery time in ptosis, which is worth popularizing [25].
There are several limitations in this study. We have not yet evaluated the effects of the two surgical methods on inflammatory factors nor on the quality of life and revision rates.
To sum up, CFS suspension is effective in the treatment of severe ptosis, with fewer complications and long-lasting efficacy.
Data Availability
The authors confirm that the data supporting the findings of this study are available within the article.
Conflicts of Interest
No conflict of interest exists.
|
2021-11-21T16:26:17.749Z
|
2021-11-19T00:00:00.000
|
{
"year": 2021,
"sha1": "6be4aead15455c581d056254a0fc7284c1ba6548",
"oa_license": "CCBY",
"oa_url": "https://downloads.hindawi.com/journals/bmri/2021/1837458.pdf",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "ac0f24ff17f670d1c13bde873eac83efbef82af0",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
116488764
|
pes2o/s2orc
|
v3-fos-license
|
The Sustainable Church: A New Way to Look at the Place of Worship
For centuries, the notions of sacred and development were closely related in European culture, both in the field of architecture, and, more broadly, in the arts. Sustainability, in this respect, mostly appeared in non-architectural terms. (The word “sustain” appears multiple times in the Bible, but mostly in relation to humans: me, you, him, them.) Beginning with the Enlightenment, a gap has developed between the two, which is still experienced, and which results in a general distrust, misinformation, and, accordingly, a fundamental misunderstanding between artists, architects and the church. Is the gap too wide to reconnect these two notions? The changes of the 20th and 21st Centuries, having affected and continuing to affect Europe, represent a valid need for the different congregations to rethink their role, and the role of their places of worship. This paper highlights some positive examples of modern and contemporary sacred architecture, designed to reflect an awareness of today’s challenges — sustainability and attention to environmental and social issues.
Introduction
The title of this paper seems to hide a controversial wording. How can sustainability and new ways of looking at the place of worship be present in one phrase?
One of Klaus Douglass' direct points from his 96 radical theses speaks for the attitude of this paper, which attempts to resolve this tension.
What Sustainability?
The question of sustainability is a current, sensitive and frequently addressed issue, and with our everyday experiences, a serious one.
Sustainability, meaning "the ability to be maintained at a certain rate or level" and "the property of biological systems to remain diverse and productive indefinitely" as defined in Wikipedia, provides 196,000,000 results on a Google search in 0.48 seconds, so we can readily refer to it as commonplace.
Sustainability, as a public perception, was generally connected to issues of environmental protection, waste management, economic consumption and recycling; in recent years, the benefits of sharing economies and knowledge, collaboration, service design or even political design have been involved, mostly on the professional side.
Eco-spirituality, the science of connecting ecology with spirituality, which aims at bringing religion and environmental activism together, "a manifestation of the spiritual connection between human beings and the environment" (Lincoln, 2000), as well as environmental awareness are approaches with stronger focus on the sustainability of the created world.
The concept of the sustainable church, however, goes further: with the inclusion of congregation (Russel, 2016), and in certain cases, with the critique of the current establishments.
The Sustainable Church, a California-based organisation, proposes "The Germination of a New Church in a Post-Christendom World" by following its threefold mission "to liberate communities through sustainable-minded education, cultivate relationships through a lived spiritual ecology, and propagate an economy of simple and sustainable living" (The Sustainable Church). In line with this, when we speak of the 'sustainable church' we need to clarify what we mean by 'church'. The words signifying the edifice and the organisation in different languages reflect different concepts. In the author's native language, Hungarian, the word 'templom' signifying the church building comes from the Latin 'templum' referring to the space, the place, the sacred place. The English word 'church' can be translated as 'the Lord's (house)', while in several Neo-Latin languages the term is derived from the Greek 'ekklesia' (see French 'église', Spanish 'iglesia').
These meanings reflect highly different understandings (the place, the house of the Lord, an object, an earthly transcription of the heavenly Jerusalem, "where the sacred manifests itself in space, the real unveils itself, the world comes into existence" (Eliade, 1987, p.63), versus an edifice including the worshippers, the congregation.
A place, a house or a house with people, that is the question. Is it enough to think of eco-friendly, off-the-grid sacred architecture created from locally sourced materials, such as thatched churches with rooftop solar panels, or, using comprehensive contemporary design thinking, and considering the parts only in terms of the whole, should we speak of church buildings and structure, congregation and institutions as one? (See Fig. 1) Most of the "sustainable" solutions offered to religious institutions, unfortunately, reflect the first two concepts. However, there are companies like Future Church (Beck Architecture) offering not only 'sustainable' churches (churches offering "an example of stewardship of God's creation and a greater benefit to the community using sustainable building concepts"), but also 'flexible' churches (able to follow the changing needs of an evolving congregation), or 'found' churches (transportable building structures). The solutions offered also include all modern technologies assisting the user experience.
To understand why we need new solutions, it is important to reflect on the past.
For centuries, in Europe and the Western world, Christianity developed to be the state religion. The growing urban centres with increasing populations also meant larger congregations and groups of worshippers who needed bigger church buildings and institutionalised systems to provide adequate religious services. At the same time, accordingly, the income and the sustainability of the institutions was secure.
The second major schism 2 , initiated by the Reformation, impacted this security; following the acts of tolerance, the divided groups of worshippers required multiple points of worship (several coexisting churches and congregations in the same settlement), with smaller congregations and increasing problems of economic sustainability.
The Enlightenment and the following trend of secularisation further increased the problem by establishing a slowly declining participation in religious practices in the Western world. This was (and is) not always understood, accepted and followed by the views and actions of the institutionalised religion: whereas, in the period of growth, the smaller church buildings were replaced by larger ones, in the era of decline, large cathedrals were not demolished and replaced by smaller churches and organisations. This led to requirements for external (for example state) financing or, in the absence of such, to closure (followed by the sale, functional and architectural conversion or demolition) of churches.
Nevertheless, even in the last Century, there have been good, modern days examples of the developing understanding of sustainability. An early example is the Benedictine Abbey Christ in the Desert, Abiquiu, New Mexico (See Fig. 2). The monastery, founded in 1964, was designed by the great architect and furniture designer George Nakashima and built from locally sourced materials. It is operating with a sustainable off-the-grid system and has since developed two dependent monasteries in Mexico, as well as a mostly Vietnamese community near Dallas, in Kerens, Texas.
The more recent trends and terms in this field are 'regenerative architecture', "the practice of engaging the natural world as the medium for, and generator of the architecture", which "responds to and utilizes the living and natural systems that exist on a site that become the »building blocks« of the architecture" (Littman, 2009, p.iii), as well as 'positive impact architecture'. Attia dates the start of regenerative architecture from 2016, connects it to the paradigm of 'Recovery' 3, and defines 'Positive Impact Building' as a natural state of the regenerative sustainable building seeking "the highest efficiency in the management of combined resources and a maximum generation of renewable resources" (Attia, 2016, p.397).
2 The first major schism, also called the East-West Schism of 1054, was an event that precipitated the final separation between the Eastern Christian churches and the Western church. The second major schism, the Reformation, celebrates its 500th anniversary this year, in 2017.
Current Issues
For an overview of the biggest challenges humanity, and as a part of that, Christianity currently faces, we have to look at the changes to the world map. The issues include global warming, the decreasing ice cap, rapidly growing world population and urbanisation, the growth of mega-cities, uneven distribution of population and wealth, unequal access to water, and migration, just to name a few.
While religiosity is generally on the rise globally (See Fig. 3), in the Global North, due to the secularisation that began with the Enlightenment, a decline in religious adherence, trust and participation in religious institutions and practices is experienced. In the Global South, it is the opposite.
There are, however, exceptions. For example, the "re-Christianised" 4 world of several Central and East European countries following the transition from communist rule. There has also been a similar phenomenon in North America: the rise of "mega-churches" (churches with 10 or 20 thousand worshippers) in line with the rise of the 'alt-right' 5 movement is a good example. However, this is only one side of the story, as Peter Beinart argues: "Americans -long known for their piety -were fleeing organized religion in increasing numbers. The vast majority still believed in God. But the share that rejected any religious affiliation was growing fast, rising from 6 percent in 1992 to 22 percent in 2014. Among Millennials, the figure was 35 percent." (Beinart, 2017) However, a growth of religion, especially Christianity, is occurring in the Global South (South America, Africa, particularly Sub-Saharan Africa, and East and South East Asia), and in places like Sub-Saharan Africa it is predicted to grow further (Pew Research Center, 2017).
5 Short for 'Alternative Right', the term refers to "a right-wing, primarily online political movement or grouping based in the U.S. whose members reject mainstream conservative politics and espouse extremist beliefs and policies typically centered on ideas of white nationalism", according to the definition in the Merriam-Webster Dictionary.
The effect of humanity on the world has been addressed innumerable times with differing success. In scientific terms, the more precise classification of the era, as Schneider rightfully points out in his essay, is the nowadays fashionable, originally geological, term 'Anthropocene' (Schneider, 2017). The more recent terms 'Capitalocene' and 'Necrocene' have also been applied. The term Anthropocene, first used by Paul J. Crutzen, appeared in 2002: "The Anthropocene could be said to have started in the latter part of the eighteenth century when analyses of air trapped in polar ice showed the beginning of growing global concentrations of carbon dioxide and methane. This date also happens to coincide with James Watt's design of the steam engine in 1784." (Crutzen, 2002, p.23) As we can see, Crutzen counts the period beginning with Watt's steam engine; although, some scholars, like Damian Carrington, calculate it from a later date: "The new epoch should begin about 1950, the experts said, and was likely to be defined by the radioactive elements dispersed across the planet by nuclear bomb tests, although an array of other signals, including plastic pollution, soot from power stations, concrete, and even the bones left by the global proliferation of the domestic chicken were now under consideration." (Carrington, 2016) Jason W. Moore, accepting the Industrial Revolution as a turning point, creates a different perspective in his recent papers, arguing for "the Capitalocene, understood as a system of power, profit and re/production in the web of life," representing "the creativity of capitalist development." He also mentions the 'Necrocene' of "deep extremism," as "a system that not only accumulates capital but drives extinction" (Moore, 2017a, 597). Moore sees the rise of capitalism as an "environment-making revolution" making use of "Cheap Nature," including "cheap human nature," particularly over the past five centuries; thus, radically transforming our world, resulting in the current ecological crisis. Moore quotes Einstein's point: "We can't solve problems by using the same kind of thinking we used when we created them," and to 'ease our souls' adds: "The bad news is that we find ourselves at multiple tipping points -including the destabilization of biospheric conditions that have sustained humanity since the dawn of the Holocene, some 12,000 years ago. The good news is that our ways of knowing -and acting -are also radically changing" (Moore, 2017b, 35).
Losses and Gains
Moore's arguments have many common points with the comprehensive view of design thinking and design culture (WHAT is happening? + WHY is it happening? + HOW to answer/act?). Not only in his views of our world of the 'design capitalism' but also of the radically rethought systems of understanding and reacting to arisen issues. Similarly, it is important to clarify the negative and positive experiences regarding the sustainability of the church and sketch possible positive examples.
Loss One: Destroyed
Perhaps the most shocking experience of all is the destruction and discontinuation of a phenomenon, sacred object or building, illustrated by the following examples.
Berlin, 1985: a Protestant (Lutheran) church coincidentally and tragically named 'Versöhnungskirche' (Church of Reconciliation). The church was unfortunately located in the no-go zone separating East and West Berlin, marked by the Berlin Wall. Due to its position, people could not visit the church. In 1985, against all protests, the East German 'Democratic' Republic decided to destroy the building by blowing it up with explosives. (See Fig. 4) The image of the falling church tower remained a symbol of blind radicalism and hatred towards all religious systems. After the unification of the two Germanies, in 2000 a new chapel (Kapelle der Versöhnung) was built on the site of the destroyed church.
1988: Bözödújfalu / Bezidu Nou / Neudorf, Romania, was a village that stood in the way of the Communist dictator Ceausescu's dream of establishing a dam and a water reservoir. Inhabitants of the village (Hungarian, Gypsy and Romanian people of Catholic, Unitarian, Orthodox and Sabbatarian religion) were forced to move out, and the village was flooded. The image of the slowly disappearing church tower, remaining the only visible point of the flooded village, became a symbol of totalitarianism wishing to eliminate history, religion and national minorities. (The church has since collapsed.)
Loss Two: Out of Use
Due to the changes referred to earlier, there are church buildings in the Global North that are becoming unsustainable. A possibly cynical though correct expression of the phenomenon is the 'redundant church' used mainly in the UK for Anglican churches becoming empty. The same phenomenon is experienced in several places such as France and the USA, but also in countries like Hungary. In some villages, there are still people wishing to go to church, but due to the small size and the economically unsustainable nature of the congregation, the lack of priests and pastors, several churches are only used on very rare occasions, if at all.
Loss Three: Sold
Some of the no longer used churches are sold as vacant real estate and 'reused' for other secular functions -sports halls, dance clubs, bars, galleries, shops or homes. This subject is so popular that we can find many examples of reused church buildings, as well as several theses discussing it comprehensively. Kiley (2004) and Lueg (2011) bring several examples from Germany and the United States.
Loss Four: Solo Souls
The nature of worship (see 'ekklesia') would assume a community, that is several people worshipping God. The Bible also states "For where two or three are gathered together in My name, there am I in the midst of them." (Matthew 18:20) Praying alone is possible, but the luxury of a private chapel (and not a private church) was and is still an exceptional case. Nevertheless, this phenomenon of luxury is still experienced today, which may also concur with the idea of separation, the growth of spirituality as a form of private faith, replacing religious practice. We can find several examples of such private chapels in Austria and Germany, but perhaps the most striking example was designed by the Italian architect Michele de Lucchi and built in Auerberg, Germany: the small chapel is designed for solo use, with one round window allowing a single perspective on a distant cross on top of a hill.
Gain One: Timeless Value
The latter example already shows us architecture of lasting value. Although this characteristic is not always considered among the features of sustainability, we should highlight, that architectural value is one of the most crucial points in defining the sustainability of an edifice. In the author's homeland, Hungary, there have been many items of sacred architecture built since the proclamation of the republic in 1990, although many were already outdated during construction. Ugly, dysfunctional churches might be physically sustainable, but their value is highly questionable, whereas a well-designed and self-sufficing church can surprise visitors and worshippers decades or even centuries after it was built.
Gain Two: Just Enough
While for centuries the cathedrals of growing size prevailed in the northern hemisphere, decreasing congregations now represent the biggest challenge, as pointed out by Jákó Fehérváry OSB. 6 A church designed for hundreds cannot suit, and be maintained by, a congregation of ten or less. The principle of "small is beautiful" 7 might be a possible solution for contemporary sacred architecture in the Western world (See Fig. 5).
6 Personal note after a comment by Jákó Fehérváry OSB at an open session talk on contemporary sacred architecture, Fuga Architectural Center, Budapest, February 3, 2017.
Gain Three: Shared (Common House)
Another solution to the decreasing congregations' needs and capacities is the idea of sharing economy: several different denominations on an ecumenical basis, or even different faiths using a shared, common sacred space for worship and other congregational occasions.
Interfaith chapels are usually found in locations with major and more complex communities. For example, at universities in the United States (like Chapman University, Orange, CA and University of Rochester, NY); whereas, ecumenical churches provide solutions mostly for parochial use in places with a dominantly similar religion, for example in smaller settlements in Hungary (with examples in Herceghalom, Hortobágy and Sajósenye, Hungary).
Gain Four: New Territories
This paper has already mentioned the out-of-use and re-used churches. However, this phenomenon also operates in the opposite direction. Having understood the message of the Turkish proverb transferred to Western thinking by Sir Francis Bacon ("if the mountain won't come to Muhammad then Muhammad must go to the mountain") 8, several religious organisations realised that people's changing habits would require churches to be established outside traditional urban centres, in new urban hubs -like airports or shopping complexes. In certain cases, making use of entire closed and unused premises, complete shopping malls have been converted into churches, as in the case of the Beech Park Baptist Church, Oliver Springs, TN. (See Fig. 6)
Gain Five: Resistance
Certain churches represent more, in structural qualities and meaning, than just a religious institution. The church of San Paolo Apostolo in Foligno (Perugia), Italy, designed by Massimiliano and Doriana Mandrelli Fuksas, is an example of this. Finished in 2009, besides being an edifice capable of resisting earthquakes (frequent in the region), it is a symbol of rebirth after the Umbria-Marche earthquakes of 1997, occupying a place where temporary housing for the then displaced residents once sat.
Gain Six: Think Big
Whereas in the Global North, decreasing population, religious adherence and congregations gave rise to the need for smaller churches, in the Global South, it is just the opposite. However, copying sacred architecture from the northern hemisphere, which used to be a habit in the times of colonialism, is not the best solution due to differing climatic conditions, habits and cultural-architectural traditions, so local solutions are needed.
A good example is the Sacred Heart Cathedral of Kericho in Kenya by John McAslan + Partners; built for the Diocese established in 1995. (See Fig. 7) The congregation is growing, the church, built using natural materials honouring the faith and the cost sensitivity of the rural community, seats 1,500.
Gain Seven: Fast Reaction, Low Budget
Erecting a church building is regarded as a major investment, requiring robust funding and a significant amount of time. Perseverance, dedication or 'civil disobedience' to quote Thoreau's wording, seasoned with creativity might, however, lead to great solutions.
Temporary churches can answer the needs of events, festivals, or other temporary uses -from serving as a chapel for construction workers to sheltering victims of natural disasters. The award-winning Japanese architect Shigeru Ban has built recycled cardboard and paper structures for survivors of major catastrophes. Among his buildings, there are also churches, providing shelter not only for the body but for the tormented soul. Ban has established the Voluntary Architects' Network, a movement to aid those in need.
From the professional side, a recent example, answering one of the major current issues, migration and the refugee crisis, was created by two students from the Yale School of Architecture. Lucas Boyd and Chad Greenlee came up with a radically new solution of 'pop-up' sacred buildings to use in refugee camps. Their concept was first exhibited at the 2016 Venice Biennale. (See Fig. 8) They believe that "While [places of worship] do not provide a basic need for an individual's biological survival, they do represent a fundamental aspect of not only an individual's life beyond utility, but an identity within the collective, a familiar place of being-and this is something that we consider synonymous with being human-a requirement for the persistence of culture" (Doroteo, 2016).
There are also excellent examples from the non-professional side. These range from the spontaneously erected St. Michael's Eritrean Christian Orthodox Church, constructed by Eritrean refugees in the Jungle, Calais, using recycled lath and greenhouse plastic foil (See Fig. 9), to the low-cost temporary baptismal immersion pool of the Free Christian Congregation of Szigetszentmiklós, Hungary, which was created from a cheap inflatable plastic pool -also used by children in the congregation for entertainment purposes. (See Fig. 10)
Conclusions
It is reasonable to conclude that a seemingly peripheral subject, the place of worship and its sustainability, although significantly challenged, can also challenge us to view it from different perspectives, and create complex, sometimes provoking and often creative thoughts.
There are, however, certain fields/actions needed:
1) Instead of isolated initiatives, a complex (design) approach.
2) Well-understood design thinking from designerly ways of knowing, artistic and empirical research through design thinking and acting (NOT product design only!).
3) Sharing and collaborating (to facilitate faster solutions).
4) Education (to clear up misunderstandings and secure potential partners in thinking and action).
As Mircea Eliade perfectly formulated: "It must be understood that the cosmicization of unknown territories is always a consecration; to organize a space is to repeat the paradigmatic work of the gods." (Eliade, 1987, p.32)
|
2019-04-16T13:27:23.863Z
|
2018-01-04T00:00:00.000
|
{
"year": 2018,
"sha1": "ca228df26faf701181f98a960741b1081a486019",
"oa_license": "CCBY",
"oa_url": "https://pp.bme.hu/ar/article/download/11574/7899",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "9b7773e39089ee56c690df81e4fd1e64ca4b0f3b",
"s2fieldsofstudy": [
"Art"
],
"extfieldsofstudy": [
"Sociology"
]
}
|
788124
|
pes2o/s2orc
|
v3-fos-license
|
Modeling of Human Prokineticin Receptors: Interactions with Novel Small-Molecule Binders and Potential Off-Target Drugs
Background and Motivation The Prokineticin receptor (PKR) 1 and 2 subtypes are novel members of family A GPCRs, which exhibit an unusually high degree of sequence similarity. Prokineticins (PKs), their cognate ligands, are small secreted proteins of ∼80 amino acids; however, non-peptidic low-molecular weight antagonists have also been identified. PKs and their receptors play important roles under various physiological conditions such as maintaining circadian rhythm and pain perception, as well as regulating angiogenesis and modulating immunity. Identifying binding sites for known antagonists and for additional potential binders will facilitate studying and regulating these novel receptors. Blocking PKRs may serve as a therapeutic tool for various diseases, including acute pain, inflammation and cancer. Methods and Results Ligand-based pharmacophore models were derived from known antagonists, and virtual screening performed on the DrugBank dataset identified potential human PKR (hPKR) ligands with novel scaffolds. Interestingly, these included several HIV protease inhibitors for which endothelial cell dysfunction is a documented side effect. Our results suggest that the side effects might be due to inhibition of the PKR signaling pathway. Docking of known binders to a 3D homology model of hPKR1 is in agreement with the well-established canonical TM-bundle binding site of family A GPCRs. Furthermore, the docking results highlight residues that may form specific contacts with the ligands. These contacts provide structural explanation for the importance of several chemical features that were obtained from the structure-activity analysis of known binders. With the exception of a single loop residue that might be perused in the future for obtaining subtype-specific regulation, the results suggest an identical TM-bundle binding site for hPKR1 and hPKR2. In addition, analysis of the intracellular regions highlights variable regions that may provide subtype specificity.
Prokineticins and their receptors
Mammalian prokineticins 1 and 2 (PK1 and PK2) are two secreted proteins of about 80-90 residues in length, which belong to the AVIT protein family [1,2,3]. Their structure includes 10 conserved cysteine residues that create five disulphide-bridged motifs (colipase fold) and an identical (AVIT) motif in the N-terminus.
PKs are expressed in a wide range of peripheral tissues, including the nervous, immune, and cardiovascular systems, as well as in the steroidogenic glands, gastrointestinal tract, and bone marrow [3,4,5,6].
PKs serve as the cognate ligands for two highly similar G-protein-coupled receptors (GPCRs) termed PK receptor subtypes 1 and 2 (hPKR1 and hPKR2 in humans) [5,7,8]. These receptors are characterized by seven membrane-spanning α-helical segments separated by alternating intracellular and extracellular loop regions. The two subtypes are unique members of family A GPCRs in terms of subtype similarity, sharing 85% sequence identity - a particularly high value among known GPCRs. For example, the sequence identity between the b1- and b2-adrenergic receptor subtypes, which are well-established drug targets, is 57%. Most sequence variation between the hPKR subtypes is concentrated in the extracellular N-terminal region, which contains a nine-residue insert in hPKR1 compared with hPKR2, as well as in the second intracellular loop (ICL2) and in the C-terminal tail (Figure 1). PKR1 is mainly expressed in peripheral tissues, such as the endocrine organs and reproductive system, the gastrointestinal tract, lungs, and the circulatory system [8,9], whereas PKR2, which is also expressed in peripheral endocrine organs [8], is the main subtype in the central nervous system. Interestingly, PKR1 is expressed in endothelial cells of large vessels, while PKR2 is strongly expressed in fenestrated endothelial cells of the heart and corpus luteum [10,11]. Expression analysis of PKRs in heterogeneous systems revealed that they bind and are activated by nanomolar concentrations of both recombinant PKs, though PK2 was shown to have a slightly higher affinity for both receptors than PK1 [12]. Hence, in different tissues, specific signaling outcomes following receptor activation may be mediated by different ligand-receptor combinations, in accordance with the expression profile of both ligands and receptors in that tissue [13]. Activation of PKRs leads to diverse signaling outcomes, including mobilization of calcium, stimulation of phosphoinositide turnover, and activation of the p44/p42 MAPK cascade in overexpressing cells, as well as in endothelial cells naturally expressing PKRs [5,7,8,14,15], leading to the divergent functions of PKs. The differential signaling capabilities of the PKRs are achieved by coupling to several different G proteins, as previously demonstrated [11].
The PKR system is involved in different pathological conditions such as heart failure, abdominal aortic aneurysm, colorectal cancer, neuroblastoma, polycystic ovary syndrome, and Kallmann syndrome [16]. While Kallmann syndrome is clearly linked to mutations in the PKR2 gene, it is not currently established whether the other diverse biological functions and pathological conditions are the result of a delicate balance of both PKR subtypes or depend solely on one of them.
Recently, small-molecule, non-peptidic PKR antagonists have been identified through a high-throughput screening procedure [17,18,19,20]. These guanidine triazinedione-based compounds competitively inhibit calcium mobilization following PKR activation by PKs in transfected cells, in the nanomolar range [17]. However, no selectivity for one of the subtypes has been observed [17].
A better understanding of the PK system can generate pharmacological tools that will affect diverse areas such as development, immune response, and endocrine function. Therefore, the molecular details underlying PK receptor interactions, both with their cognate ligands and small-molecule modulators, and with downstream signaling partners, as well as the molecular basis of differential signaling, are of great fundamental and applied interest.
Structural information has been instrumental in delineating interactions and the rational development of specific inhibitors [21]. However, for many years only the X-ray structure of bovine Rhodopsin has been available [22] as the sole representative structure of the large superfamily of seven-transmembrane (7TM) domain GPCRs.
In recent years crystallographic data on GPCRs have grown significantly and now include, for example, structures of the b1- and b2-adrenergic receptors in both active and inactive states, the agonist- and antagonist-bound A2A adenosine receptor, and the CXCR4 chemokine receptor bound to small-molecule and peptide antagonists. The new structures were reviewed in [23,24] and ligand-receptor interactions were summarized in [25]. Nevertheless, the vast number of GPCR family members still requires using computational 3D models of GPCRs for studying these receptors and for drug discovery. Different strategies for GPCR homology modeling have been developed in recent years (reviewed in [26]), and these models have been successfully used in virtual ligand screening (VLS) procedures to identify novel GPCR binders [21].
Successful in-silico screening approaches, applied to GPCR drug discovery, include both structure-based and ligand-based techniques and their combinations. Molecular ligand docking is the most widely used computational structure-based approach, employed to predict whether small-molecule ligands from a compound library will bind to the target's binding site. When a ligand-receptor complex is available, either from an X-ray structure or an experimentally verified model, a structure-based pharmacophore model describing the possible interaction points between the ligand and the receptor can be generated using different algorithms and later used for screening compound libraries [27]. In ligand-based VLS procedures, the pharmacophore is generated via superposition of 3D structures of several known active ligands, followed by extracting the common chemical features responsible for their biological activity. This approach is often used when no reliable structure of the target is available [28].
In this study, we analyzed known active small-molecule antagonists of hPKRs vs. inactive compounds to derive ligand-based pharmacophore models. The resulting highly selective pharmacophore model was used in a VLS procedure to identify potential hPKR binders from the DrugBank database. The interactions of both known and predicted binders with the modeled 3D structure of the receptor were analyzed and compared with available data on other GPCR-ligand complexes. This supports the feasibility of binding in the TM-bundle and provides testable hypotheses regarding interacting residues. The potential cross-reactivity of the predicted binders with the hPKRs was discussed in light of prospective 'off-target' effects. The challenges and possible avenues for identifying subtype-specific binders are addressed in the discussion section.
Homology Modeling and Refinement
All-atom homology models of human PKR1 and PKR2 were generated using the I-TASSER server [29], which employs a fragment-based method. Here a hierarchical approach to protein structure modeling is used in which fragments are excised from multiple template structures and reassembled, based on threading alignments. Sequence alignment of modeled receptor subtypes and the structural templates were generated by the TCoffee server [30]; this information is available in the Supporting Information as figure S1. A total of 5 models per receptor subtype were obtained. The model with the highest C-score (a confidence score calculated by I-Tasser) for each receptor subtype, was exported to Discovery Studio 2.5 (DS2.5; Accelrys, Inc.) for further refinement. In DS2.5, the model quality was assessed using the protein report tool, and the models were further refined by energy minimization using the CHARMM force field [31]. The models were then subjected to side-chain refinement using the SCWRL4 program [32], and to an additional round of energy minimization using the Smart Minimizer algorithm, as implemented in DS2.5. The resulting models were visually inspected to ensure that the side chains of the most conserved residues in each helix are aligned to the templates. An example of these structural alignments appears in figure S2.
For validation purposes, we also generated homology models of the turkey b1 adrenergic receptor (b1adr) and the human b2 adrenergic receptor (b2adr). The b1adr homology model is based on 4 different b2adr crystal structures (PDB codes -3SN6, 2RH1, 3NY8, and 3d4S); the b2adr model is based on the crystal structures of b1adr (2VT4, 2YCW), the Dopamine D3 receptor (3PBL), and the histamine H1 receptor (3RZE). The models were subjected to the same refinement procedure as previously described, namely, deletion of loops, energy minimization, and side chain refinement, followed by an additional step of energy minimization. Sometimes the side chain rotamers were manually adjusted, following the aforementioned refinement procedure.
Throughout this article, receptor residues are referred to by their one-letter code, followed by their full sequence number in hPKR1. TM residues also have a superscript numbering system according to Ballesteros-Weinstein numbering [33]; the most conserved residue in a given TM is assigned the index X.50, where X is the TM number, and the remaining residues are numbered relative to this position.
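As a small illustration (a hypothetical helper, not part of the paper's tooling), the convention can be expressed as an offset from the X.50 anchor residue of the relevant helix:

```python
# Ballesteros-Weinstein index from a full sequence position, given the sequence
# position of the most conserved (X.50) residue of the same TM helix.
def bw_number(seq_pos: int, tm_index: int, x50_seq_pos: int) -> str:
    offset = seq_pos - x50_seq_pos        # negative for residues N-terminal of X.50
    return f"{tm_index}.{50 + offset}"

# Example with a made-up anchor position: if residue 144 sat 18 positions before
# the TM3 anchor (3.50), it would be reported as 3.32, matching the R144(3.32) style.
print(bw_number(144, 3, 162))   # -> "3.32"
```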
Identification of a 7TM-bundle binding site
The location of a potential small-molecule-TM binding cavity was identified based on (1) identification of receptor cavities using the "eraser" and "flood-filling" algorithms [34], as implemented in DS2.5 and (2) use of two energy-based methods that locate energetically favorable binding sites -Q-SiteFinder [35], an algorithm that uses the interaction energy between the protein and a simple Van der Waals probe to locate energetically favorable binding sites, and SiteHound [36], which uses a carbon probe to similarly identify regions of the protein characterized by favorable interactions. A common site that encompasses the results from the latter two methods was determined as the TM-bundle binding site for small molecules.
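A rough sketch of such a consensus step, under the assumption that each method's output can be reduced to a cloud of probe-point coordinates; the 3 Å overlap cutoff is an arbitrary illustrative choice, not a value from the paper.

```python
# Keep only probe points from one site prediction that lie close to some probe
# point of the other prediction; the surviving points approximate the common site.
import numpy as np

def consensus_site(points_a: np.ndarray, points_b: np.ndarray, cutoff: float = 3.0) -> np.ndarray:
    """points_a (N,3) and points_b (M,3): predicted-site coordinates in angstroms."""
    d = np.linalg.norm(points_a[:, None, :] - points_b[None, :, :], axis=-1)
    return points_a[d.min(axis=1) <= cutoff]

# Toy usage with random coordinates standing in for the two methods' outputs
site_1 = np.random.rand(50, 3) * 10.0
site_2 = np.random.rand(40, 3) * 10.0
common = consensus_site(site_1, site_2)
```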
SAR Analysis
A dataset of 107 small-molecule hPKR antagonists was assembled from the literature [18,19]. All ligands were built using DS2.5. pKa values were calculated for each ionizable moiety on each ligand, to determine whether the ligand would be charged and which atom would be protonated at a biological pH of 7.5. All ligands were then subjected to the "Prepare Ligands" protocol, to generate tautomers and enantiomers, and to set standard formal charges.
For the SAR study, the dataset was divided into two parts: (1) active molecules, with IC50 values below 0.05 µM, and (2) inactive molecules, with IC50 values above 1 µM. IC50 values were measured in the calcium mobilization assay [18,19]. When possible, the molecules were divided into pairs of active and inactive molecules that differ in only one chemical group, and all possible pharmacophore features were computed using the "Feature mapping" protocol (DS 2.5). These pairs were then compared to determine those pharmacophore features' importance for biological activity.
Ligand-Based Pharmacophore Models
The HipHop algorithm [37], implemented in DS2.5, was used for constructing ligand-based pharmacophore models. This algorithm derives common features of pharmacophore models using information from a set of active compounds. The two most active hPKR antagonists (the lowest IC50 values in the Janssen patent [19,20]) were selected as 'reference compounds' from the data set described above, and an additional antagonist molecule with a different scaffold was added from a recently published dataset [38]; together these were used to generate the models (figure S3). Ten models in total were generated, presenting different combinations of chemical features. These models were first evaluated by their ability to successfully recapture all known active hPKR antagonists. An enrichment study was performed to evaluate the pharmacophore models. The dataset contains 56 active PKR antagonists seeded in a random library of 5909 decoys retrieved from the ZINC database [39]. The decoys were selected so that they would have general and chemical properties similar to the known hPKR antagonists (by filtering the ZINC database according to the average molecular properties of the known hPKR antagonists ± 4 standard deviations). In this way, enrichment is not simply achieved by separating trivial features (such as mass, overall charge, etc.). These properties included AlogP (a log of the calculated octanol-water partition coefficient, which measures the extent of a substance's hydrophilicity or hydrophobicity), molecular weight, formal charge, the number of hydrogen bond donors and acceptors, and the number of rotatable bonds. All molecules were prepared as previously described, and a conformational set of 50 "best-quality" low-energy conformations was generated for each molecule. All conformers within 20 kcal/mol of the global energy minimum were included in the set. The dataset was screened using the "ligand pharmacophore mapping" protocol (DS2.5), with the minimum interference distance set to 1 Å and the maximum omitted features set to 0. All other protocol parameters were maintained at the default settings.
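A hedged sketch of this property-window filter using open-source RDKit descriptors (the actual work used DS2.5; the descriptor choices and SMILES handling below are illustrative assumptions):

```python
# Build a mean +/- n*SD window from the known actives' descriptors and test
# whether a candidate decoy falls inside it on every property.
import numpy as np
from rdkit import Chem
from rdkit.Chem import Descriptors, Crippen

def props(mol):
    return np.array([
        Crippen.MolLogP(mol),                  # AlogP-like lipophilicity estimate
        Descriptors.MolWt(mol),                # molecular weight
        Chem.GetFormalCharge(mol),             # formal charge
        Descriptors.NumHDonors(mol),           # H-bond donors
        Descriptors.NumHAcceptors(mol),        # H-bond acceptors
        Descriptors.NumRotatableBonds(mol),    # rotatable bonds
    ], dtype=float)

def property_window(active_smiles, n_sd=4.0):
    p = np.array([props(Chem.MolFromSmiles(s)) for s in active_smiles])
    return p.mean(axis=0) - n_sd * p.std(axis=0), p.mean(axis=0) + n_sd * p.std(axis=0)

def in_window(smiles, low, high):
    m = Chem.MolFromSmiles(smiles)
    return m is not None and bool(np.all((props(m) >= low) & (props(m) <= high)))
```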
To analyze enrichment results and select the best pharmacophore model for subsequent virtual screening, ROC curves were constructed for each model, where the fraction of identified known binders (true positives, representing sensitivity) was plotted against the fraction of identified library molecules (false positives; 1 - specificity). Based on this analysis, the best pharmacophore model was selected for virtual screening purposes.
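A minimal sketch of this ROC construction with standard tooling; the scores below are random placeholders standing in for the pharmacophore FitValues:

```python
# Actives (label 1) and decoys (label 0) are ranked by score; sensitivity (TPR)
# is plotted against 1 - specificity (FPR), and a cutoff recovering all actives
# can be read off the thresholds array.
import numpy as np
from sklearn.metrics import roc_curve, auc

y_true = np.array([1] * 56 + [0] * 5909)     # 56 actives seeded among 5909 decoys
scores = np.random.rand(y_true.size)         # placeholder for real FitValues

fpr, tpr, thresholds = roc_curve(y_true, scores)
print("AUC =", auc(fpr, tpr))
cutoff_all_actives = thresholds[np.argmax(tpr >= 1.0)]
```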
Generation of the DrugBank data set and virtual screening
The DrugBank database [40] (release 2.0), which contains ~4,900 drug entries, including 1382 FDA-approved small-molecule drugs, 123 FDA-approved biotech (protein/peptide) drugs, 71 nutraceuticals, and over 3240 experimental drugs, was used for virtual screening. The database was filtered based on the average molecular properties of known hPKR antagonists ± 4 SD (standard deviations). These properties included AlogP, molecular weight, the number of hydrogen bond donors and acceptors, the formal charge, and the number of rotatable bonds. The liberal ±4 SD interval was chosen because the calculated range of molecular properties of the known antagonists was very narrow. Molecules were retained only if their formal charge was neutral or positive, since the known compounds were positively charged. This resulted in a test set containing 432 molecules. All molecules were prepared as previously described, and a set of 50 "best-quality" low-energy conformations was generated for each molecule; all conformations were within 20 kcal/mol of the global energy minimum.
The data set was screened against the pharmacophore model (chosen from the ROC analysis) using the "ligand pharmacophore mapping" protocol in DS2.5. All protocol settings were maintained at their defaults except for the minimum interference distance, which was set to 1 Å, and the maximum omitted features, which was set to 0. To prioritize the virtual hits, fit values were extracted to reflect the quality of molecule mapping onto the pharmacophore. Only molecules with fit values above the enrichment ROC curve cutoff that identifies 100% of the known PKR antagonists (FitValue > 2.85746) were retained as virtual hits for further analysis.
The similarity between the virtual hits and known small-molecule PKR antagonists was evaluated by calculating the Tanimoto coefficient distance measure using the 'Find similar molecules by fingerprints' module in DS2.5, which calculates the number of AND bits normalized by the number of OR bits, according to SA/(SA+SB+SC), where SA is the number of AND bits (bits present in both the target and the reference), SB is the number of bits in the target but not the reference, and SC is the number of bits in the reference but not the target.
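As an illustration only (the paper used DS2.5 fingerprints, whereas Morgan fingerprints are used here), the same AND/OR bit ratio is what a standard Tanimoto similarity call computes:

```python
# Tanimoto = |A AND B| / |A OR B| = SA / (SA + SB + SC) in the notation above.
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

ref = Chem.MolFromSmiles("c1ccccc1CN")    # placeholder reference antagonist
hit = Chem.MolFromSmiles("c1ccccc1CCN")   # placeholder virtual hit

fp_ref = AllChem.GetMorganFingerprintAsBitVect(ref, 2, nBits=2048)
fp_hit = AllChem.GetMorganFingerprintAsBitVect(hit, 2, nBits=2048)
print(DataStructs.TanimotoSimilarity(fp_ref, fp_hit))
```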
Small-Molecule Docking
Molecular docking of the small-molecule hPKR antagonist dataset (active and inactive molecules), as well as of the virtual hits, to the hPKR1 homology model was performed using LigandFit [34] as implemented in DS2.5. LigandFit is a shape-complementarity-based algorithm that performs flexible-ligand, rigid-protein docking. In our experiments, the binding site was defined as a 284.8 Å³ TM cavity surrounded by binding-site residues identified using the energy-based methods described above. Default algorithm settings were used for docking. The final ligand poses were selected based on their empirical LigScore docking score [41]. Here we used the (default) Dreiding force field to calculate the VdW interactions.
All docking experiments were conducted on a model without extracellular and intracellular loops. Loop configurations are highly variable among the GPCR crystal structures [42]. Therefore, deleting the loops in order to reduce the uncertainty stemming from inaccurately predicted loops is a common practice in the field [43,44,45].
To further validate our protocol, we also performed molecular redocking of the small-molecule partial inverse agonist carazolol and the antagonist cyanopindolol to their original X-ray structures from which loops were deleted, and to loopless homology models of b1adr and b2adr using LigandFit, as previously described. As in the case of docking to the hPKR1 model, this procedure was performed on loopless X-ray structures and models. The binding site was identified from receptor cavities using the "eraser" and "flood-filling" algorithms, as implemented in DS2.5. The highest scoring LigScore poses were selected as the representative solutions. The ligand-receptor poses were compared to the corresponding X-ray complexes by (1) calculating the root mean square deviation (RMSD) of heavy ligand atoms from their respective counterparts in the crystallized ligand after superposition of the docked ligand-receptor complex onto the X-ray structure; (2) calculating the number of correct atomic contacts in the docked ligand-receptor complex compared with the X-ray complex, where an atomic contact is defined as a pair of heavy ligand and protein atoms located at a distance of less than 4Å ; and by (3) comparing the overall number of correctly predicted interacting residues in the docked complex to the X-ray complex (where interacting residues are also defined as residues located less than 4Å from the ligand).
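A minimal sketch (not the authors' code) of the first two pose-quality measures, assuming matched heavy-atom ordering and coordinates already superposed onto the receptor frame:

```python
# Heavy-atom RMSD against the crystallographic ligand, plus the count of
# ligand-protein heavy-atom pairs closer than 4 angstroms.
import numpy as np

def rmsd(docked: np.ndarray, xray: np.ndarray) -> float:
    """docked, xray: (N,3) coordinates with identical atom ordering."""
    return float(np.sqrt(np.mean(np.sum((docked - xray) ** 2, axis=1))))

def atomic_contacts(ligand: np.ndarray, protein: np.ndarray, cutoff: float = 4.0) -> int:
    """Count ligand-protein atom pairs within the distance cutoff."""
    d = np.linalg.norm(ligand[:, None, :] - protein[None, :, :], axis=-1)
    return int((d < cutoff).sum())
```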
Small-molecule docking analysis
The resulting ligand poses of the known hPKR antagonists were analyzed to identify all ligand-receptor hydrogen bonds, charged interactions, and hydrophobic interactions.
The specific interactions formed between the ligand and binding-site residues were quantified to determine the best-scoring pose of each ligand (active and inactive). For each ligand pose, a vector indicating whether this pose forms a specific hydrogen bond and/or hydrophobic π interaction with each of the binding-site residues was generated. The data were hierarchically clustered using the clustergram function of the Bioinformatics Toolbox in Matlab version 7.10.0.499 (R2010a). The pairwise distance between these vectors was computed using the Hamming distance, which calculates the fraction of coordinates that differ. For an m-by-n data matrix X, treated as m (1-by-n) row vectors x1, x2, …, xm, the distance between the vectors xs and xt is defined as d(xs, xt) = #(xsj ≠ xtj)/n, where # denotes the number of coordinates that differ. The poses of the virtual hit ligands were further filtered using structure-based constraints derived from analyzing the interactions between known PKR antagonists and the receptor, obtained in the known-binders docking section of this work. The constraints included (1) an electrostatic interaction between the ligand and Glu119 2.61, (2) at least one hydrogen bond between the ligand and Arg144 3.32 and/or Arg307 6.58, and (3) at least two hydrophobic interactions (π-π or π-cation) between the ligand and Arg144 3.32 and/or Arg307 6.58.
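A hedged sketch of the interaction-fingerprint clustering with placeholder data (the original analysis used Matlab's clustergram); SciPy's 'hamming' metric returns exactly the fraction-of-differing-coordinates distance defined above:

```python
# Each pose is a binary vector over binding-site residues (1 = forms an H-bond or
# hydrophobic contact with that residue); poses are clustered hierarchically.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage, fcluster

poses = np.random.randint(0, 2, size=(20, 12))   # 20 poses x 12 binding-site residues
d = pdist(poses, metric="hamming")               # fraction of residues that differ
tree = linkage(d, method="average")
labels = fcluster(tree, t=0.3, criterion="distance")
```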
Evolutionary selection analysis
Evolutionary selection analysis of the PKR subtypes' coding DNA sequences was carried out using the Selecton server (version 2.4) [46,47]. The Selecton server is an on-line resource which automatically calculates the ratio (ω) between non-synonymous (Ka) and synonymous (Ks) substitution rates, to identify the selection forces acting at each site of the protein. Sites with ω > 1 are indicative of positive Darwinian selection, and sites with ω < 1 suggest purifying selection. As input, we used the homologous coding DNA sequences of 13 mammalian species for each subtype, namely, human, rat, mouse, bovine, rabbit, panda, chimpanzee, orangutan, dog, gorilla, guinea pig, macaque and marmoset. We used the default algorithm options, and the obtained results were tested for statistical significance using the likelihood ratio test, as implemented in the server.
SAR analysis highlights molecular features essential for small-molecule antagonistic activity
A review of the literature revealed a group of non-peptidic compounds that act as small-molecule hPKR antagonists, with no apparent selectivity toward one of the subtypes [17,18,19,20,38]. The reported compounds have either a guanidine triazinedione or a morpholine carboxamide scaffold. We decided to perform structure-activity relationship (SAR) analysis of the triazine-based compounds, owing to the more detailed pharmacological data available for these compounds [17,18,19,20].
SAR analysis of the reported molecules with and without antagonistic activity toward hPKR provides hints about the geometrical arrangement of chemical features essential for the biological activity. By comparing pairs of active and inactive compounds that differ in only one functional group, one can determine the activity-inducing chemical groups at each position.
To this end, we constructed a dataset of 107 molecules identified by high-throughput screening. This included 51 molecules that we defined as inactive (Ca 2+ mobilization IC 50 higher than 1 mM), and 56 molecules defined as active (IC 50 below 0.05 mM). All compounds share the guanidine triazinedione scaffold (see figure 2), which includes (a) a heterocyclic ring bearing three nitrogen atoms and two oxygen atoms, and (b) a guanidine group, which is attached to the main ring by a linker (position Q in figure 2).
Where possible, the dataset was divided into pairs of active and inactive molecules that differ in only one functional group. This resulted in 13 representative pairs of molecules that were used to determine which specific chemical features in these molecules are important for antagonistic activity, in addition to the main triazine ring and guanidine group. As shown in figure 2, the four variable positions in the scaffold (A1, D, L2, and Q) were compared among the 13 pairs, and the activity-facilitating chemical groups at each position were determined. These include the following features: (1) Positions A1 and D require an aromatic ring with a hydrogen bond acceptor in position 4 of the ring. (2) Position L2 may only accept the structure -NH(CH2)-.
(3) Position Q may include up to four hydrogen bond donors, a positive ionizable feature, and an aromatic ring bearing a hydrogen bond acceptor.
In conclusion, the SAR analysis revealed 2D chemical features in the molecules, which may be important for receptor binding and activation. Next, these features will be used to generate ligand-based pharmacophore models for virtual screening (next section) and in docking experiments to determine the plausible ligand-receptor contacts (see below).
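A minimal sketch of the matched-pair logic used in this SAR analysis is given below; the substituent labels and compound entries are invented placeholders, not the actual dataset.

```python
# Matched-pair SAR sketch: molecules are represented by their substituents at
# the four variable scaffold positions; pairs differing at exactly one position
# are used to attribute the activity change to that position.
from itertools import combinations

POSITIONS = ("A1", "D", "L2", "Q")

dataset = [  # placeholder entries, not real compounds
    {"id": "cpd1", "active": True,  "A1": "4-OMe-phenyl", "D": "4-F-phenyl", "L2": "-NH(CH2)-", "Q": "guanidine"},
    {"id": "cpd2", "active": False, "A1": "phenyl",       "D": "4-F-phenyl", "L2": "-NH(CH2)-", "Q": "guanidine"},
]

def single_difference(m1, m2):
    """Return the one position at which the two molecules differ, if exactly one."""
    diffs = [p for p in POSITIONS if m1[p] != m2[p]]
    return diffs[0] if len(diffs) == 1 else None

for m1, m2 in combinations(dataset, 2):
    pos = single_difference(m1, m2)
    if pos and m1["active"] != m2["active"]:
        active, inactive = (m1, m2) if m1["active"] else (m2, m1)
        print(f"Position {pos}: '{active[pos]}' (active) vs '{inactive[pos]}' (inactive)")
```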
Ligand-based virtual screening for novel PKR binders
To identify novel potential hPKR binders, we utilized a ligand-based procedure in which molecules are evaluated by their similarity to a characteristic 3D fingerprint of known ligands, the pharmacophore model. This model is a 3D ensemble of the essential chemical features necessary to exert optimal interactions with a specific biological target and to trigger its biological response. The purpose of the pharmacophore modeling procedure is to extract these chemical features from a set of known ligands with the highest biological activity. The two most potent (IC 50 < 0.02 mM for intracellular Ca 2+ mobilization) hPKR antagonists were selected from the dataset described in the previous section, to form the training set (compounds 1 and 2, figure S3). In addition, we also incorporated data from a third compound published recently (compound 3 in figure S3), to ensure good coverage of the available chemical space [38].
The HipHop algorithm [37] was used to generate common features of pharmacophore models. This algorithm generated 10 different models, which were first tested for their ability to identify all known active hPKR triazine-based antagonists (data not shown). During the pharmacophore generation and analysis procedure, we also projected the knowledge generated during our 2D SAR analysis onto the 3D pharmacophore models, and chose those that best fit the activity-facilitating chemical features identified in the 2D SAR analysis previously described. The two best models, which recaptured the highest number of known active hPKR binders and included all required 2D features deduced from the SAR analysis, were chosen for further analysis. The 3D spatial relationship and geometric parameters of the models are presented in figure 3A. Both models share a positive ionizable feature and a hydrogen bond acceptor, corresponding to the N3 atom and O1 atoms on the main ring, respectively (figure 2). However, the models vary in the degree of hydrophobicity tolerated: model 2 is more restrictive, presenting one aromatic ring feature and one hydrophobic feature, whereas model 1 is more promiscuous, presenting two general hydrophobic features. The aromatic/hydrophobic features correspond to positions A1 and D of the scaffold (figure 2). Figure 3A also shows the mapping of one of the training set molecules onto the pharmacophore model. All four features of both models are mapped well, giving a fitness value (FitValue) of 3.602 and 3.378 for hypotheses 1 and 2, respectively. The fitness value measures how well the ligand fits the pharmacophore. For a four-feature pharmacophore the maximal FitValue is 4.
Next, we performed an enrichment study to ultimately evaluate the pharmacophore model's performance. Our aim was to verify that the pharmacophores are not only able to identify the known antagonists, but do so specifically with minimal false positives. To this end, a dataset of 56 known active hPKR small-molecule antagonists was seeded in a library of 5909 random molecules retrieved from the ZINC database [39]. The random molecules had chemical properties (such as molecular weight and formal charge) similar to the known PKR antagonists, to ensure that the enrichment test would not be biased by trivial differences in molecular properties. Both models successfully identified all known compounds embedded in the library. The quality of mapping was assessed by generating receiver operating characteristic (ROC) curves for each model (figure 3B), taking into consideration the ranking of fitness values of each virtual hit. The plots provide an objective, quantitative measure of whether a test discriminates between two populations. As can be seen from figure 3B, both models perform extremely well, generating almost a perfect curve. The difference in the curves highlights the difference in pharmacophore stringency. The stricter pharmacophore model 2 (which has an aromatic ring feature instead of a hydrophobic feature) performs best in identifying a large number of true positives while maintaining a low false positive rate. Thus, we used model 2 in the subsequent virtual screening experiments. Note that it is possible that some of the random molecules that were identified by the pharmacophore models, and received fitness values similar to known antagonists, may be potential hPKR binders. A list of these ZINC molecules is available in table S1. These compounds differ structurally from the known small-molecule hPKR antagonists because the maximal Tanimoto similarity score between them and the known antagonists is 0.2626 (compounds with Tanimoto coefficient values >0.85 are generally considered similar to each other).
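The enrichment evaluation just described amounts to ranking actives and decoys by FitValue and tracing a ROC curve. The following sketch uses scikit-learn and random placeholder scores purely for illustration; the actual FitValues came from DS2.5.

```python
# Sketch of the enrichment evaluation: known actives seeded among decoys,
# every molecule scored by a pharmacophore FitValue, ROC built by sweeping the
# FitValue threshold. Scores below are random placeholders.
import numpy as np
from sklearn.metrics import roc_curve, auc

labels = np.array([1] * 56 + [0] * 5909)     # 1 = known antagonist, 0 = ZINC decoy
fit_values = np.random.rand(labels.size)      # placeholder FitValues

fpr, tpr, thresholds = roc_curve(labels, fit_values)
print("AUC =", auc(fpr, tpr))
```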
This analysis revealed that the ligand-based pharmacophore models can be used successfully in a VLS study and that they can identify completely different and novel scaffolds, which nevertheless possess the required chemical features.
hPKR1 as a potential off-target of known drugs

Recent work by Keiser and colleagues [48] utilized a chemical similarity approach to predict new targets for established drugs. Interestingly, they showed that although drugs are intended to be selective, some of them do bind to several different targets, which can explain drug side effects and efficacy, and may suggest new indications for many drugs. Inspired by this work, we decided to explore the possibility that hPKRs can bind established drugs. Thus, we applied the virtual screening procedure to a dataset of molecules retrieved from the DrugBank database (release 2.0) [40]. The DrugBank database [40] combines detailed drug (chemical, pharmacological, and pharmaceutical) data with comprehensive drug target (sequence, structure, and pathway) information. It contains 4886 molecules, which include FDA-approved small-molecule drugs, experimental drugs, FDA-approved large-molecule (biotech) drugs and nutraceuticals. As a first step in the VLS procedure, the initial dataset was pre-filtered, prior to screening, according to the average molecular properties of known active compounds ± 4 SD. The pre-filtered set consisted of 432 molecules that met these criteria. This set was then queried with the pharmacophore, using the 'ligand pharmacophore mapping' module in DS2.5 (Accelrys, Inc.). A total of 124 hits were retrieved from the screening. Only those hits that had FitValues above a cutoff defined according to the pharmacophores' enrichment curve, which identifies 100% of the known antagonists, were further analyzed, to ensure that the selected molecules' compatibility with the pharmacophore is as good as that of the known antagonists. This resulted in 10 hits with FitValues above the cutoff (see figure 4). These include 3 FDA-approved drugs and 7 experimental drugs. All these compounds target enzymes, identified by their EC numbers (corresponding to the chemical reactions they catalyze): most of the targets are peptidases (EC 3.4.11, 3.4.21 and 3.4.23), including aminopeptidases, serine proteases, and aspartic endopeptidases, and an additional single compound targets a receptor protein-tyrosine kinase (EC 2.7.10). The fact that only two classes of enzymes were identified is quite striking, in particular when taking into account that these two groups combined represent only 2.6% of the targets in the screened set. This may indicate the intrinsic ability of hPKRs to bind compounds originally intended for this set of targets. The similarity between the known hPKR antagonists and the hits, calculated using the Tanimoto coefficient, is shown in figure 4: the highest similarity score was 0.165563, indicating that the identified hits are dissimilar from the known hPKR antagonists, as was also observed for the ZINC hits (see Table S1). Interestingly, when calculating the structural similarity within the EC 3.4 and EC 2.7.10 hits, the highest value is 0.679, indicating consistency in the ability to recognize structurally diverse compounds (see figure S4).
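The two chemoinformatic steps used on the DrugBank set, property-based pre-filtering (mean ± 4 SD of the known actives) and Tanimoto similarity to the known antagonists, can be sketched as follows. RDKit and Morgan fingerprints are assumptions made for illustration; the study used DS2.5 descriptors and fingerprints.

```python
# Hedged sketch: (1) keep only molecules whose property falls within
# mean ± 4 SD of the known actives, and (2) compute Tanimoto similarity
# to a known antagonist. SMILES inputs are assumed.
import numpy as np
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem, Descriptors

def property_window(active_smiles, n_sd=4):
    """Acceptance window for molecular weight (repeat for other properties)."""
    mw = [Descriptors.MolWt(Chem.MolFromSmiles(s)) for s in active_smiles]
    mu, sd = np.mean(mw), np.std(mw)
    return mu - n_sd * sd, mu + n_sd * sd

def tanimoto(smiles_a, smiles_b):
    """Tanimoto similarity between Morgan fingerprints of two molecules."""
    fa = AllChem.GetMorganFingerprintAsBitVect(Chem.MolFromSmiles(smiles_a), 2)
    fb = AllChem.GetMorganFingerprintAsBitVect(Chem.MolFromSmiles(smiles_b), 2)
    return DataStructs.TanimotoSimilarity(fa, fb)

# Compounds with Tanimoto < 0.85 to every known antagonist would be regarded
# as structurally novel scaffolds, as in the text above.
```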
To predict which residues in the receptor may interact with the key pharmacophores identified in the SAR analysis previously mentioned, and to assess whether the novel ligands harboring the essential pharmacophors fit into the binding site in the receptor, we carried out homology modeling and docking studies of the known and predicted ligands.
Molecular Modeling of hPKR1 predicts the small-molecule binding site in the typical TM-bundle site of Family A GPCRs
As a first step in analyzing small-molecule binding to hPKRs, we generated homology models of the two subtypes, hPKR1 and hPKR2. The models were built using the I-TASSER server [29]. These multiple-template models are based on X-ray structures of bovine Rhodopsin (PDB code: 1L9H) [49], the human β2-adrenergic receptor (2RH1) [50], and the human A 2A -adenosine receptor (3EML) [51]. The overall sequence identity shared between the PKR subtypes and each of the three templates is approximately 20%. Although this value is quite low, it is similar to cases in which modeling has been applied successfully and satisfactorily recaptured the binding site and binding modes [52]. Furthermore, the sequence alignment of hPKRs and the three template receptors is in good agreement with known structural features of GPCRs (figure S1). Namely, all TM residues known to be highly conserved in family A GPCRs [33] (N 1.50, D 2.50, R 3.50, W 4.50, P 5.50, P 6.50) are properly aligned. The only exception is the NP 7.50 xxY motif in TM7, which aligns to NT 7.50 LCF in hPKR1.
The initial crude homology model of hPKR1, obtained from I-TASSER, was further refined by energy minimization and side chain optimization. Figure 5 shows the general topology of the refined hPKR1 model. This model exhibits the major characteristics of family A GPCRs, including conservation of all key residues, and a palmitoylated cysteine in the C terminal tail, which forms a putative fourth intracellular loop. Also, similarly to family A GPCR X-ray structures, a conserved disulfide bridge connects the second extracellular loop (ECL2) with the extracellular end of TM3, formed between Cys217 and Cys137, respectively. However, both extracellular and intracellular loops are not very likely to be modeled correctly, due to their low sequence similarity with the template structures, and the fact that loop configurations are highly variable among GPCR crystal structures [42]. The emerging consensus in the field is that these models perform better in docking and virtual screening with no modeled loops at all than with badly modeled loops [43,44,45]. We therefore did not include the extracellular and intracellular loops in the subsequent analysis.
Overall, our hPKR1 model has good conservation of key features shared among family A GPCR members. Conservation of this fold led us to hypothesize that hPKRs possess a 7TM-bundle binding site capable of binding drug-like compounds, similar to the well-established TM bundle binding site typical of many family A GPCRs [25]. This is in addition to a putative extracellular surface binding site, which most likely binds the endogenous hPKR ligands, which are small proteins. Several synthetic small-molecule hPKR antagonists have been recently reported [17,18,19,20,38]. We hypothesized that these small molecules will occupy a pocket within the 7TM bundle [23,53].
To identify the potential locations of a small-molecule-TM binding site, we first mapped all receptor cavities. We then utilized two energy-based methods, namely, Q-SiteFinder [35] and SiteHound [36], to locate the most energetically favorable binding sites by scanning the protein structure for the best interaction energy with different sets of probes. The most energetically favorable site identified by the two methods overlaps; it is located in the upper part of the TM bundle, among TMs 3,4,5,6, and 7. The position of the identified pocket is shown in the insert in Figure 5.
According to the structural superposition of the hPKR1 model on its three template structures, the predicted site is similar in position to the well-established TM-bundle binding site of the solved X-ray structures [54,55]. Furthermore, specific residues lining these pockets, which are important for both agonist and antagonist binding by GPCRs [25], are well aligned with our model (figure S2).
Comparing the identified TM-bundle binding site between the two subtypes revealed that they are completely conserved, except for one residue in ECL2 -Val207 in hPKR1, which is Phe198 in hPKR2. Figure S5 presents a superposition of the two models, focusing on the binding site. This apparent lack of subtype specificity in the TM-bundle binding site is in agreement with the lack of specificity observed in activity assays of the small-molecule triazine-based antagonists [17], which could suppress calcium mobilization following Bv8 (a PK2 orthologue) stimulation to the same degree, in hPKR1 and hPKR2 transfected cells [17].
We therefore will focus mainly on hPKR1 and will return to the issue of subtype specificity in the Discussion.
Docking of known small-molecule antagonists to hPKR1 binding site and identification of important interacting residues
To understand the mechanistic reasons why particular pharmacophores are required for ligand activity, one has to examine the interactions between the ligands and the receptor.
As a preliminary step, we performed a validation study, aimed at determining whether our modeling and docking procedures can reproduce the bound poses of representative family A GPCR antagonist-receptor crystallographic complexes. We first performed redocking of the cognate ligands carazolol and cyanopindolol back to the X-ray structures from where they were extracted and from which the loops were deleted. The results indicate that the docking procedure can faithfully reproduce the crystallographic complex to a very high degree (figure S6 A-C), with excellent ligand RMSD values of 0.89-1.2 Å between the docked pose and the X-ray structure (see table S2), in accordance with similar previous studies [44,56,57]. The redocking process could also reproduce the majority of heavy atomic ligand-receptor contacts observed in the X-ray complex and, more generally, the correct interacting binding site residues and specific ligand-receptor hydrogen bonds, despite docking to loopless structures. Next, we built homology models of β1adr and β2adr and performed docking of the two antagonists into these models to examine the ability of homology modeling, combined with the docking procedure, to accurately reproduce the crystal structures. As can be seen from figure S6 and from the ligand RMSD values in table S2, the results can reproduce the correct positioning of the ligand in the binding site, and at least part of the molecule can be correctly superimposed onto the crystallized ligand, although the resulting RMSD values are above 2 Å. The overall prediction of interacting binding site residues is good, correctly predicting 47-66% of the interactions (see Table S2).
We therefore performed molecular docking of the small-molecule hPKR antagonist dataset to the predicted hPKR1 allosteric 7TM-bundle binding site, to explore the possible receptor-ligand interactions.
The set of 56 active and 51 inactive small-molecule antagonists was subjected to flexible-ligand, rigid-receptor docking to the hPKR1 model using LigandFit (as implemented in DS2.5, Accelrys, Inc.) [34]. For each compound the 50 best energy conformations were generated and docked into the binding site, resulting in an average of 250 docked poses for each molecule.
The final ligand poses for each molecule were selected based on the highest LigScore1 docking score, since no experimental data regarding possible ligand contacting residues was available. The best scoring docking poses were analyzed visually for features that were not taken into account in the docking calculation, such as appropriate filling of the binding site, such that the compound fills the binding site cavity and does not "stick out". Specific ligand-receptor interactions were monitored across all compounds. Figure 6 shows representative docked poses of two active (A,B) and two inactive compounds (C,D). As shown, the active molecules adopt a conformation that mainly forms interactions with TMs 2, 3, and 6, such that the ligand is positioned in the center of the cavity, blocking the entry to it and adequately filling the binding site, as described. In contrast, the inactive small molecules are apparently incapable of simultaneously maintaining all of these contacts, and are positioned in different conformations that mostly maintain interactions with only some of the TMs mentioned. For the active compounds, the most prevalent interaction is observed between the ligand and residues Arg144 3.32 and Arg307 6.58, either through a hydrogen bond or a π-cation interaction. The active ligands interact with at least one of these two residues. In addition, an electrostatic interaction was observed between the active ligands and Glu119 2.61 (as seen from figure 6A, B). To quantify this observation, the specific interactions formed (HB, charged, π-π and π-cation) were monitored across all the best scoring poses of the docked ligands (active and inactive), and the results, which represent the number of specific contacts formed between each ligand and all polar/hydrophobic binding site residues, were clustered (figure 7).
As shown, the hierarchical structure obtained from the clustering procedure of receptor-ligand contacts only, clearly separates the compounds into sub-trees that correspond to the experimental active/inactive distinction. In the active sub-tree, the ligands form a charged interaction with Glu119 2.61 , and interact mainly with Cys137 3.25 , Arg144 3.32 , and Arg307 6.58 . In contrast, in the inactive sub-tree, the molecules still form interactions with Arg144 3.32 to some extent, but the interactions with Glu119 2.61 , Cys137 3.25 , and Arg307 6.58 are drastically reduced, and instead some of the ligands interact with Thr145 3.33 and Met332 7.47 . In addition, some of the active ligands form either specific interactions or van der Waals contacts with Asn141 3.29 , Phe300 6.51 , and Phe324 7.39 .
All of these positions have been shown experimentally to be important for ligand binding in different family A GPCR members, ranging from aminergic (such as the β2-adrenergic receptor) to peptide receptors (such as chemokine receptors) [25].
In general, the functional groups in the scaffold, which were identified in our SAR analysis as being important for antagonist activity, form specific interactions within the binding site (figure 8). Namely, the main triazine ring of the scaffold forms hydrogen bonds through its O and N atoms and π-cation interactions. The two aromatic rings form π-cation interactions and hydrogen bonds through the O/F/Cl atoms at position 4 of the ring, and the positive charge at position Q and hydrogen bond donors interact with residues from helices 2, 3, and 6, predominantly Glu119 2.61, Arg144 3.32, and Arg307 6.58, as described above. The compatibility of the SAR data with the docking results supports the predicted binding site and modes, and provides a molecular explanation of the importance of particular pharmacophores in the ligand.
The positions predicted to specifically bind essential functional groups in the ligands (mainly Glu119 2.61 , Arg144 3.32 , and Arg307 6.58 ) can be mutated in future studies, to confirm their role in ligand binding inside the predicted TM-bundle cavity, as recently applied to other GPCRs [58] and summarized in [25].
Docking of virtual hits to the hPKR1 model suggests potential binders
Next, the 10 molecules identified through ligand-based virtual screening of the DrugBank database were docked to the hPKR1 homology model. All docking experiments were performed using LigandFit, as described in the previous section. However, here the analysis was stricter: the resulting docked poses of each molecule were post-processed using structure-based filters derived from the analysis of ligand-receptor interactions formed between the known small-molecule antagonists and receptor residues (see Materials and Methods for details), and were not only selected based on the highest docking score. The underlying hypothesis is that the same interactions are formed by the potential ligands as by the known antagonists. Selected poses of all 10 molecules successfully passed this procedure. All poses were visually examined by checking that they adequately fill the binding site and form the desired specific interactions. All 10 molecules successfully passed this analysis and were considered as candidate compounds that may serve as potential hPKR binders.
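A hedged sketch of the structure-based post-filter is shown below; the pose is assumed to be pre-summarized into sets of interacting residue labels, which is a simplification of the actual pose analysis performed in DS2.5.

```python
# Sketch of the three interaction constraints derived from the known antagonists
# (see the constraints listed in the small-molecule docking analysis above).
def passes_constraints(pose):
    charged_ok = "Glu119" in pose["charged"]                    # (1) salt bridge to Glu119(2.61)
    hbond_ok = bool({"Arg144", "Arg307"} & pose["hbonds"])      # (2) H-bond to Arg144 and/or Arg307
    n_pi = pose["pi_contacts"].get("Arg144", 0) + pose["pi_contacts"].get("Arg307", 0)
    return charged_ok and hbond_ok and n_pi >= 2                # (3) >= 2 pi-pi / pi-cation contacts

example_pose = {"charged": {"Glu119"},
                "hbonds": {"Arg144"},
                "pi_contacts": {"Arg144": 1, "Arg307": 1}}      # invented pose summary
print(passes_constraints(example_pose))   # True for this illustrative pose
```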
Next, we focused on the three FDA-approved hits identified as potential ligands for hPKRs, namely, Indinavir, Argatroban, and Lapatinib. Figure 9 shows representative examples of docking of Indinavir, Argatroban, and Lapatinib to the hPKR1 binding site. As shown, the compounds adequately fill the binding site and are predicted to form specific interactions with residues found to be important for binding of the known hPKR antagonists, namely, a charged interaction with Glu119 2.61, and hydrogen bonds and/or stacking interactions with Arg144 3.32 and Arg307 6.58. These compounds also form interactions with additional binding site residues, which interact with the known binders (see figure 7).
Each of the compounds is widely used in the clinic, and provides well-tested and safe compounds that may also exert their actions via hPKRs. The potential cross-reactivity of one such candidate drug, Indinavir, is further addressed in the Discussion.
Discussion
Prokineticin receptor (PKR) subtypes 1 and 2 are novel members of family A GPCRs. Prokineticins and their receptors play important roles under various physiological conditions, and blocking PKRs may serve as a therapeutic tool for various pathologies, including acute pain, circadian rhythm disturbances, inflammation, and cancer.
In this study, we extracted essential functional groups from small-molecule PKR antagonists that were previously reported, using structure-activity relationship analysis, and we used them in a virtual screening procedure. Consequently, we were able to identify several potential PKR ligands with novel scaffolds. Interestingly, the virtual hits included several HIV protease inhibitors that are discussed next in terms of known side effects and potential new indications of these drugs. Computational docking of known ligands to the multiple-template 3D model of a PKR's structure enabled us to predict ligand-receptor contacts and provided a structural explanation of the importance of the chemical features we obtained from the analysis of known PKR binders.
Homology modeling of the hPKR subtypes and docking of known small-molecule antagonists
In this study we modeled the 3D structure of the hPKR subtypes and explored the interactions formed between hPKR1 and small-molecule binders. Our computational analysis revealed that hPKR1 is predicted to possess a TM-bundle binding site, capable of binding small-molecule ligands, similarly to other GPCR family A members, such as the aminergic receptors. This occurs despite the fact that the receptors' endogenous ligands are relatively large proteins, which most likely bind the extracellular surface of the receptors. The latter is demonstrated in experimental data on Kallmann syndrome mutations. Kallmann syndrome is a human disease characterized by the association of hypogonadotropic hypogonadism and anosmia. Several loss-of-function mutations in the human PKR2 gene have been found in Kallmann patients [45]. Among them is the p.Q210R mutation in ECL2 (corresponding to Q219 in hPKR1), which completely abolishes native ligand binding and has no affinity for the orthologue ligand MIT1 (Mamba intestinal toxin 1, which shares 60% sequence identity with PK2 and contains the essential N-terminal motif AVITGA) [59]. Existence of both an orthosteric extracellular binding site capable of binding small proteins and an allosteric TM binding site was already shown in family A GPCRs. For example, the melanin-concentrating hormone receptor (MCHR), for which the endogenous ligand is a peptide, also binds small-molecule antagonists in its TM-bundle cavity [60,61].
The predicted TM-bundle site is identical between the two hPKR subtypes, except for one residue in ECL2 (Val207 in PKR1 corresponding to Phe198 in PKR2). Since this is a hydrophobic residue in both receptors, its side chain will probably face the TM cavity and not the solvent. Indeed, the residue was modeled to face the TM cavity and was predicted by the energy-based methods to be part of the TM-bundle binding site. If specific binders are pursued in the future, this, albeit minor, difference between two hydrophobic amino acids might be targeted.
Through docking experiments of the known hPKR antagonists, we have identified important residues that interact at this site, namely, Glu119 2.61, Arg144 3.32, and Arg307 6.58. These residues form specific interactions with the chemical features of the ligand that we found in our SAR analysis to be essential for the molecules' antagonistic activity. Specifically, Arg144 3.32 is analogous to Asp113 3.32 of the β2-adrenergic receptor, which is an experimentally established receptor interaction site for both agonists and antagonists [62]. This position has also been shown to be important for ligand binding in many other family A GPCRs as well as in other branches of the GPCR super-family, such as the bitter taste receptors (summarized in [25]). This position is highly conserved within different family A GPCR subfamilies, but it is divergent among these subfamilies, for example, an Asp in the aminergic receptors, compared with a Thr in hormone protein receptors. It was therefore assumed that the position may play a role in specific ligand binding within certain subfamilies [55]. Similarly, we suggest that although the residue type is divergent between the different subfamilies (for example, a positive Arg in the Prokineticin receptors compared with a negative Asp in aminergic receptors), its importance in ligand binding in such diverse receptors may be due to its spatial location in the TM-bundle binding site. In addition, Arg307 6.58 is analogous to Tyr290 6.58 of the GnRH receptor, which was found to be important for binding the GnRH I and GnRH II peptide ligands [63]. The equivalent residue at position 6.58 is also suggested, by mutagenesis studies, to play an important role in ligand binding and/or receptor activation of other peptide GPCRs, such as the NK2 tachykinin receptor [64], the AT 1A angiotensin receptor [65], and the CXCR1 chemokine receptor [66]. Moreover, in the recent crystallographic X-ray structure of the CXCR4 chemokine receptor bound to a cyclic peptide antagonist, a specific interaction between position 6.58 and the peptide was observed [67]. Hence, position 6.58 may serve as a common position for the binding of both peptides (such as the endogenous ligands PK1 and PK2) and small-molecule ligands.
Finally, in our analysis position 2.61, which is occupied by a Glutamic acid in hPKRs, was found to be essential for antagonist binding, since an electrostatic interaction may be formed between this negatively charged residue and the positive charge on the ligand. This may explain the need for the positive charge on the known small-molecule antagonists, which was indeed deduced from the structure-activity analysis. The ligand's positive charge may interact with the negatively charged residue in receptor position 2.61, which was also shown to be important in ligand binding in the dopamine receptors [55].
In summary, the observed interactions reinforce the predicted putative binding site and may support the concept that family A GPCRs share a common small-molecule binding pocket inside the TM cavity, regardless of the nature of their cognate ligand.
Docking of ligands to a single experimental or model structure of a GPCR receptor has been shown to reproduce the binding mode of the ligands in several cases [44,68,69], to enrich known ligands in structure-based virtual screening campaigns [57,70], and to rationalize specificity profiles of GPCR antagonists [71] and thus was the approach taken here.
In several non-GPCR cases, good docking results have been reported using multiple receptor conformations [72]. Such an approach was successful for a sequence identity range of 30-60% between models and available templates [73].
Though GPCR homology models typically have a lower sequence identity to their potential templates, using ensembles of multiple homology models or of a perturbed X-ray structure may nevertheless be a viable approach, as was recently reported [74,75,76]. Current breakthroughs in X-ray structure determination of GPCRs will enable systematic testing of the most appropriate receptor structure representation and of docking performance, against the benchmark of experimental structures.
Identification of potential novel hPKR binders
Our study used SAR of known hPKR binders to identify novel potential binders of hPKR1, and highlighted possible 'off-target' effects of FDA-approved drugs. Interestingly, the novel candidates share little structural chemical similarity with the known hPKR binders but share the same pharmacophores and similar putative interactions within the TM-bundle binding site. Such a "scaffold hopping" result is common and is often sought after in drug discovery. The term is based on the assumption that the same desired biological activity may be achieved by different molecules that maintain some of the essential chemical features as the template molecule, i.e., the molecule possesses the desired biological activity on the target, but is structurally dissimilar otherwise. Scaffold hopping is required, for instance, when the central scaffold is involved in specific interactions with the target, and changing it may lead to improved binding affinity. One example of successful scaffold hopping, resulting in a structurally diverse structure, is the selective D2 and D3 dopamine receptor agonist Quinpirole [77].
The newly identified potential cross-reactivity may have two implications -it might explain the side effects of these drugs (as discussed next), and it might also suggest novel roles for these drugs as potential hPKR inhibitors. One such example of potential cross-reactivity identified through our VLS procedure is Indinavir.
Indinavir sulfate is a hydroxyaminopentane amide and a potent and specific FDA-approved inhibitor of the HIV protease. Indinavir acts as a competitive inhibitor, binding to the active site of the enzyme, since it contains a hydroxyethylene scaffold that mimics the normal peptide linkage (cleaved by the HIV protease) but which itself cannot be cleaved. Thus, the HIV protease cannot perform its normal function -proteolytic processing of precursor viral proteins into mature viral proteins. Specific adverse effects associated with Indinavir include hyperbilirubinaemia and cutaneous toxicities [78,79], accelerated atherosclerosis, and an increased rate of cardiovascular disease [80]. Protease inhibitors may cause cardiovascular disease by inducing insulin resistance, dyslipidemia, or by endothelial dysfunction.
A study of the effects of HIV protease inhibitors on endothelial function showed that in healthy HIV-negative subjects, Indinavir induced impaired endothelium-dependent vasodilation after 4 weeks of treatment owing to reduced nitric oxide (NO) production/release by the endothelial cells or reduced NO bioavailability [81]. HIV patients treated with Indinavir presented lower urinary excretion of the NO metabolite NO 3 [82]. Wang et al. demonstrated that Indinavir, at a clinical plasma concentration, can cause endothelial dysfunction through eNOS (endothelial nitric oxide synthase) down-regulation in porcine pulmonary artery rings and HPAECs (human pulmonary arterial endothelial cells), and that endothelium-dependent relaxation of the vessel rings was also reduced following Indinavir treatment [83].
Endothelium-derived NO is the principal vasoactive factor that is produced by eNOS. Lin et al. showed that PK1 induced eNOS phosphorylation in bovine adrenal cortex-derived endothelial cells [14]. It has also been shown that PK1 suppressed giant contraction in the circular muscles of mouse colon, and that this effect was blocked by the eNOS inhibitor L-NAME. In vitro, PK1 stimulated the release of NO from longitudinal muscle-myenteric plexus cultures [84]. We have found that PK1 treatment elevated eNOS mRNA levels in luteal endothelial cells. Cells were also treated in the presence of a PI3/Akt pathway inhibitor, which caused a 20-40% reduction in eNOS levels (Levit and Meidan, unpublished data).
These opposing effects of Indinavir and PK1 on eNOS levels and NO production/release are compatible with the chemically based hypothesis arising from the current work, which suggests that Indinavir can bind to the hPKR subtypes by acting as a PKR antagonist. We suggest that this would subsequently reduce eNOS expression levels in endothelial cells and impair NO bioavailability, leading, at least partially, to the observed Indinavir side effects in HIV patients. This hypothesis should be explored experimentally in future studies to determine the possible binding of Indinavir to hPKRs and its subsequent effects.
The proposed hypothesis is in accordance with the concept of polypharmacology: specific binding and activity of a drug at two or more molecular targets, often across target boundaries. For example, ligands targeting aminergic family A GPCRs were also found to act on protein kinases [85]. These "off-target" drug actions can induce adverse side effects and increased toxicity. In contrast, there are also cases where the drug is a "magic shotgun", and its clinical effect results from its action on many targets, which in turn enhances its efficacy. For example, drugs acting through multiple GPCRs have been found to be more effective in treating psychiatric diseases such as schizophrenia and depression [86]. This concept was demonstrated by Keiser and colleagues [48], who utilized a statistics-based chemoinformatics approach to predict off-targets for ~900 FDA-approved small-molecule drugs and ~2800 pharmaceutical compounds. The targets were compared by the similarity of the ligands that bind to them. This comparison resulted in 3832 predictions, of which 184 were inspected by literature searches. Finally, the authors tested 30 of the predictions experimentally, by radioligand competition binding assays. For example, the α1 adrenergic receptor antagonist Doralese was predicted and observed to bind to the dopamine D4 receptor (both are aminergic GPCRs), and most interestingly, the HIV-1 reverse transcriptase inhibitor Rescriptor was found to bind to the histamine H4 receptor. The latter observation crosses major target boundaries. These two targets have neither an evolutionary or functional role nor structural similarity in common. However, some of the known side effects of Rescriptor treatment include painful rashes. This observation is similar to our findings of possible interactions of Indinavir and the other enzyme-targeting VLS hits with the PKR subtypes.
In summary, defining the selective and non-selective actions of GPCR targeting drugs will help in advancing our understanding of the drugs' biological action and the observed clinical effect, including side effects.
Potential differences between the hPKR subtypes
Both subtypes are capable of binding the cognate ligands at approximately the same affinity [12]. Therefore, the diversification of cellular events following activation of the subtypes [16] is not likely to stem from the extracellular loop regions. This suggestion warrants further experimental investigation. Our study also suggests, in agreement with previous findings, that small-molecule antagonists are not likely to easily differentiate between the subtypes. This is because the TM-bundle small-molecule binding site identified in this study is identical in its amino acid composition for the two hPKR subtypes. Thus, an intriguing question arises: what molecular mechanisms are responsible for PKRs' differential signaling patterns?
The variation of protein amino acid composition in the extracellular and intracellular regions of PKRs is significant (represented as black-filled circles in Fig. 1). Moreover, analysis of the level of selection acting on the two PKR subtypes, by calculating the ratio between non-synonymous (Ka) and synonymous (Ks) substitutions [46,47], predicted purifying selection for the transmembrane helices of both subtypes (figure S7). This analysis should be expanded in future studies, as PKR subtype sequences from additional species become available.
The variation in amino acid composition in the intracellular regions of the PKR subtypes may affect at least two signaling events: receptor phosphorylation by kinases and the receptors' coupling to G proteins. We therefore suggest that this region is most likely to be involved in differential signaling, as detailed next.
Interaction with G proteins
Differential coupling of PKR subtypes to G proteins has been demonstrated experimentally (reviewed in [16]). Coupling of PKR1 to Gα11 in endothelial cells induces MAPK and PI3/Akt phosphorylation, which promotes endothelial cell proliferation, migration and angiogenesis [11]. In cardiomyocytes, coupling of PKR1 to Gαq/11 induces PI3/Akt phosphorylation and protects cardiomyocytes against hypoxic insult. In contrast, PKR2 couples to Gα12 in endothelial cells, causing Gα12 internalization and down-regulation of ZO-1 expression, leading to vacuolarization and fenestration of these cells. In cardiomyocytes, PKR2 acts through Gα12 and Gαq/11 coupling and increases cell size and sarcomere numbers, leading to eccentric hypertrophy [16]. Thus, sites of interactions with G-proteins may represent an additional factor affecting PKR subtype specificity.
Receptor Phosphorylation
It is well established that GPCR phosphorylation is a complex process involving a range of different protein kinases that can phosphorylate the same receptor at different sites. This may result in differential signaling outcomes, which can be tailored in a tissue-specific manner to regulate biological processes [87]. We suggest that part of the differential signaling of PKR subtypes may be due to differential phosphorylation of the intracellular parts of the receptors. Namely, phospho-acceptor sites may be missing in one subtype or another, and analogous positions may be phosphorylated by different kinases due to variation in the positions surrounding the phospho-acceptor residue (which is conserved between subtypes), thus changing the kinase recognition sequence [88]. Hence, using different combinations of kinases for each subtype results in different phosphorylation signatures. This phosphorylation signature translates to a code that directs the signaling outcome of the receptor. This may include two types of signaling events: (a) common phosphorylation events for both subtypes will mediate common regulatory features such as arrestin recruitment and internalization, and (b) subtype-specific events will mediate specific signaling functions related to the specialized physiological role of the receptor subtype. Preliminary analysis using prediction tools for phosphorylation sites suggests that Thr178 (Thr169) in the second intracellular loop and Tyr365 (Gln356) in the cytoplasmic tail of hPKR1 (hPKR2) may represent subtype-specific phosphorylation-related sites (Levit, Meidan and Niv, unpublished data). Further experimental studies are required to elucidate the role of receptor phosphorylation in specific signaling events following activation of PKR subtypes.
Conclusions
In conclusion, we have identified a small-molecule TM-bundle site that can accommodate the known small-molecule hPKR antagonists. Hence, it can be explored in the future for designing additional PKR-targeting compounds. The VLS procedure identified tens of compounds that are likely to affect hPKRs. Interestingly, FDA-approved drugs may also bind to these receptors, and in some instances, such as with Indinavir, this binding may provide a potential explanation for the drug's side effects. One residue in ECL2 is different between the two subtypes (Val207 in hPKR1 corresponding to Phe198 in PKR2), and several residues in the intracellular loops may affect phosphorylation. These residues may be exploited for designing subtype-specific pharmacological tools, to target different pathological conditions involving hPKRs.

Supporting Information

Figure S1. Structure-based multiple sequence alignment of modeled PKR subtypes and X-ray structures used as templates in the modeling procedure. Alignment was generated by the T-Coffee server. The most conserved residue in each helix is shaded yellow and is indicated by its Ballesteros-Weinstein numbering [33]. Identical residues are in red and similar residues are in blue. bRho, bovine Rhodopsin (PDB code: 1L9H); hB2ADR, human β2-adrenergic receptor (2RH1); hA2AR, human A 2A adenosine receptor (3EML). The sequence of T4 lysozyme that was fused to the hB2ADR and hA2AR proteins to facilitate structure determination was removed prior to alignment, for clarity. (TIF)

Figure S2. Structural superposition of the PKR1 model and GPCR X-ray templates used for homology modeling. All structures are shown in ribbon representation. PKR1 is in turquoise, human β2-adrenergic is in orange (A), bovine rhodopsin is in gold (B), and human A 2A -adenosine receptor is in gray (C). (D) Superposition of the hPKR1 model and the β2-adrenergic receptor structure with emphasis on the TM-bundle binding site. The structures are shown in a view looking down on the plane of the membrane from the extracellular surface. Binding site residues experimentally known to be important for ligand binding are denoted as sticks and are labeled with Ballesteros-Weinstein numbering. The T4 lysozyme fusion protein was removed from the β2-adrenergic and the A 2A -adenosine receptor structures, for clarity. Structural superposition was performed using the Matchmaker module in UCSF Chimera version 1.4.1. (TIF)

Figure S3. Structures of the three known PKR antagonists that were used as reference compounds for constructing ligand-based pharmacophore models. (TIF)

Figure S4. Structural similarity between the identified VLS hits plotted as a heatmap. The degree of similarity was calculated using the Tanimoto coefficient, as described in Methods, and ranges between 0 (completely dissimilar compounds) and 1 (identical compounds). Compounds with similarity values >0.85 are usually considered structurally similar. Color intensity corresponds to the similarity value according to the legend. The heatmap was prepared using Matlab version 7.10.
Reverse Genetics of Murine Rotavirus: A Comparative Analysis of the Wild-Type and Cell-Culture-Adapted Murine Rotavirus VP4 in Replication and Virulence in Neonatal Mice
Small-animal models and reverse genetics systems are powerful tools for investigating the molecular mechanisms underlying viral replication, virulence, and interaction with the host immune response in vivo. Rotavirus (RV) causes acute gastroenteritis in many young animals and infants worldwide. Murine RV replicates efficiently in the intestines of inoculated suckling pups, causing diarrhea, and spreads efficiently to uninoculated littermates. Because RVs derived from human and other non-mouse animal species do not replicate efficiently in mice, murine RVs are uniquely useful in probing the viral and host determinants of efficient replication and pathogenesis in a species-matched mouse model. Previously, we established an optimized reverse genetics protocol for RV and successfully generated a murine-like RV rD6/2-2g strain that replicates well in both cultured cell lines and in the intestines of inoculated pups. However, rD6/2-2g possesses three out of eleven gene segments derived from simian RV strains, and these three heterologous segments may attenuate viral pathogenicity in vivo. Here, we rescued the first recombinant RV with all 11 gene segments of murine RV origin. Using this virus as a genetic background, we generated a panel of recombinant murine RVs with either N-terminal VP8* or C-terminal VP5* regions chimerized between a cell-culture-adapted murine ETD strain and a non-tissue-culture-adapted murine EW strain and compared the diarrhea rate and fecal RV shedding in pups. The recombinant viruses with VP5* domains derived from the murine EW strain showed slightly more fecal shedding than those with VP5* domains from the ETD strain. The newly characterized full-genome murine RV will be a useful tool for dissecting virus–host interactions and for studying the mechanism of pathogenesis in neonatal mice.
Introduction
Rotavirus (RV) is the most common causative agent of severe acute diarrhea in infants and small animals worldwide [1]. While RV has been isolated from many mammalian and avian species, RV infection relatively infrequently demonstrates cross-species transmission and persistence in a heterologous host species, a phenomenon known as host-range restriction (HRR) [2][3][4][5][6][7][8][9][10]. For instance, the genome sequences of RV strains isolated from humans generally cluster with those of previously isolated human RV strains. RVs from other animal species are occasionally isolated from humans but rarely spread or persist in the human population [11]. HRR has been exploited to generate two live-attenuated RV vaccines currently used worldwide (i.e., RotaTeq (Merck) and RotaSiil (Serum Institute of India)), using bovine RV strains as a genetic backbone [12].
A natural mono-reassortant RV D6/2 strain was isolated by plaque assays from an intestinal homogenate of a suckling mouse co-infected with wild-type murine EDIM-EW and the tissue-culture-adapted simian RRV strain [5]. Unlike wild-type murine RV, D6/2 replicates in cultured cell lines while still efficiently causing diarrhea in inoculated pups [5,10]. Of note, 10 of the 11 gene segments of D6/2 are derived from the EDIM-EW strain; the exception is gene segment 4, which encodes the cell attachment protein VP4 (Table 1). It has been shown that cell-culture-adapted murine RV strains do not cause diarrhea as efficiently as the wild-type murine RV [6,[15][16][17]. Therefore, D6/2 provides a unique opportunity to further interrogate the viral determinants of HRR. Since the first plasmid-based reverse genetics was developed for the simian SA11 strain [18], reverse genetics has been established for several other animal RV strains [19][20][21][22][23][24][25][26]. We previously used D6/2 as a genetic backbone and rescued a recombinant murine-like RV by introducing two more gene segments from the simian SA11 strain: gene segments 1 and 10, which encode the viral polymerase VP1 and the viral enterotoxin NSP4, respectively (Table 1) [21]. The new recombinant virus D6/2 with the two additional genes derived from simian RV (so-called rD6/2-2g) replicated in the intestine of inoculated pups and transmitted to their uninoculated littermates. Using rD6/2-2g as the backbone, we have since demonstrated that murine RV NSP1, an interferon antagonist, plays a critical role in viral replication in vivo [27]. In addition, by comparing VP4s from heterologous RV strains in an isogenic rD6/2-2g background, we demonstrated that VP4s from heterologous RV strains contribute to HRR to varying degrees [14]. Furthermore, we rescued an rD6/2-2g RV expressing a bioluminescent reporter, Nano-Luciferase, to characterize systemic dissemination of RV in vivo in a non-invasive manner [28].
Despite the utility of rD6/2-2g, this recombinant murine-like RV is a compromise between rescue efficiency and in vivo virulence. It still harbors three gene segments (gene segments 1, 4, and 10) from heterologous simian RV strains (Table 1). Considering that these three gene segments may be potentially associated with HRR in mice, it is desirable to create an RV that grows well in culture but has all 11 gene segments derived from a murine strain to be used in a mouse model. Toward that objective, we attempted to replace the three heterologous gene segments in rD6/2-2g with those from a murine-origin RV strain.
The obvious primary challenge for this objective is the RV structural protein VP4. It is the spike protein present on the surface of RV virions and is required for cell attachment and entry into host cells. VP4 is cleaved by trypsin into two distinct domains (Figure 1A). The N-terminal VP8* domain forms the head structure of the virion spike and engages in attachment to the target cell surface. The β-barrel domain in VP5* forms the body of the spike, and the C-terminal region of the VP5* domain functions as the foot of the spike by interacting with the VP7 and VP6 proteins in the virion [29][30][31]. Here, we used reverse genetics and improved the virulence of rD6/2-2g by rescuing a series of recombinant RVs with all 11 gene segments from a murine RV strain. We also generated murine RV VP4 chimeric viruses between the cell-culture-adapted ETD_822 strain and the wild-type murine EW strain and compared diarrheal diseases in the suckling mouse model. The data suggest that murine RV reverse genetics offers a new tool to study molecular mechanisms of RV replication, virulence, and spread in the homologous murine model.
Reverse Genetics
Recombinant viruses were generated using an optimized reverse genetics protocol, as reported previously [21]. Briefly, we mixed 11 rescue plasmids and one helper plasmid (0.4 µg for nine of the rescue plasmids (excluding those for NSP2 and NSP5), 1.2 µg of the two rescue plasmids for NSP2 and NSP5, and 0.8 µg of C3P3-G1) in OPTI-MEM I Reduced-Serum Medium (Thermo Fisher Scientific, Waltham, MA, USA). The mixture of plasmids was transfected into BHK-T7 cells using TransIT-LT1 (Mirus, Madison, WI, USA). The next day, the medium was replaced with serum-free DMEM and the cells were cultured overnight. Then, MA104 cells were added to the BHK-T7 cells and cultured in the presence of 0.5 µg/mL of trypsin. To generate the VP4 chimeric viruses, we replaced the rescue plasmid for VP4 with the appropriate plasmids. Rescued viruses were amplified in MA104 cells, and the VP4 sequence of the viruses was confirmed by DNA sequencing before use.
Focus-Forming Unit Assays
MA104 cells were seeded on 96-well plates and cultured for 2 to 3 days. Virus samples were activated with 5 µg of trypsin, serially diluted with SFM, and inoculated into MA104 cells. The cells were fixed with 10% formalin (Fisherbrand, Waltham, MA, USA) 14 h after inoculation, permeabilized with PBS with 0.05% Triton X-100, and stained with rabbit anti-RV DLP and HRP-conjugated anti-rabbit IgG polyclonal antibody (Sigma-Aldrich, St. Louis, MO, USA). RV antigen was visualized with the AEC Substrate Kit and peroxidase (Vector Laboratories, Newark, CA, USA). The number of foci was counted under a microscope, and the virus titer was expressed as FFU/mL.
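For illustration, the titer calculation from focus counts reduces to a one-line formula; the numbers below are invented.

```python
# Back-calculate FFU/mL from the focus count of one diluted well.
def ffu_per_ml(focus_count, dilution_factor, inoculum_volume_ml):
    return focus_count * dilution_factor / inoculum_volume_ml

# e.g. 42 foci counted in a well inoculated with 0.05 mL of a 1:10,000 dilution
print(ffu_per_ml(42, 1e4, 0.05))   # 8.4e6 FFU/mL (hypothetical numbers)
```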
Mouse Infection
129sv mice were purchased from Taconic Biosciences Inc. and maintained at the animal facility in the Veterinary Medical Unit of the Palo Alto VA Health Care System. Five-day-old 129sv pups were orally inoculated by gastric lavage with 1 × 10^3 FFU of recombinant viruses or 1 × 10^3 DD50 of the wild-type EW strain. Mice were monitored to collect stool samples by gentle abdominal pressure for 12 days. Stool samples were collected in 40 µL of PBS (+) (CORNING) and stored at −80 °C until use. The animal experiment protocol was approved by the Stanford Institutional Animal Care Committee.
ELISA
The relative quantity of RV fecal shedding was assessed by sandwich ELISA, as previously described, using guinea pig anti-RV TLP antiserum and rabbit anti-RV DLP antiserum generated in the Greenberg lab [34]. Briefly, ELISA plates (E&K Scientific Products, Swedesboro, NJ, USA, cat. #EK-25061) were coated with guinea pig anti-RV TLP antiserum and blocked with PBS supplemented with 2% BSA. After washing the plate with PBS containing 0.05% Tween 20, 70 µL of PBS containing 2% BSA and 2 µL of the fecal samples were added to the plate and incubated at 4 °C overnight. The RV antigen in the stool samples was detected by rabbit anti-RV DLP antiserum, HRP-conjugated anti-rabbit IgG (Sigma-Aldrich, St. Louis, MO, USA, cat. #A0545), and peroxidase substrate (SeraCare, Milford, MA, USA). The signal intensity at 450 nm was measured with the ELx800 microplate reader (BIO-TEK, Shoreline, WA, USA).
Statistical Analysis
Fecal shedding curves by RVs were analyzed by two-way ANOVAs with the Tukey multiple comparison test using GraphPad Prism 8.
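The analysis above was run in GraphPad Prism 8. Purely as an illustrative equivalent, the Python sketch below performs a two-way ANOVA (virus strain by day post-inoculation) followed by a Tukey comparison using statsmodels; the data file and column names are hypothetical.

```python
# Illustrative Python equivalent of the Prism analysis described above.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical long-format table: one OD450 reading per pup, virus, and day.
df = pd.read_csv("fecal_shedding.csv")  # columns: od450, virus, day

# Two-way ANOVA with interaction between virus strain and day post-inoculation.
model = ols("od450 ~ C(virus) * C(day)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))

# Tukey multiple-comparison test across virus strains.
print(pairwise_tukeyhsd(df["od450"], df["virus"]))
```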
Generation of Recombinant Murine RVs
In a previous study, we synthesized all 11 rescue plasmids from the D6/2 strain. However, we were unable to rescue a completely recombinant D6/2 strain after multiple trials [21]. Therefore, we used recombinant D6/2 with gene segments 1 and 10, which encode VP1 and NSP4, from the simian SA11 strain as an alternative approach [21]. The amino acid sequence identity of VP1 and NSP4 between the murine EW and simian SA11 strains was 86.2% and 62.3%, respectively. To generate a recombinant virus with a gene constellation closer to a fully murine RV, we reconstructed rescue plasmids for gene segments 1 and 10 from the D6/2 strain. To our surprise, we obtained recombinant D6/2 (rD6/2) with the new rescue plasmids, despite there being no difference in the cDNA sequences of gene segments 1 and 10 compared with those in the previous, failed rescue plasmids.
We next attempted to further optimize gene segment 4 in rD6/2, which is derived from the simian RRV strain. Wild-type murine RV strains (including the EW strain) propagated in mouse intestines do not efficiently infect immortalized cell lines. Previous studies that compared the nucleotide sequences of murine RV before and after adaptation to cultured cells reported that gene segment 4 is one of the determinants of effective viral replication in cultured cell lines [17]. This suggests that murine RV from mouse intestines replicates poorly in cell lines, possibly due to a partial restriction at the attachment and entry step. Therefore, we used the rescue plasmid for gene segment 4 of the cell-culture-adapted murine ETD_822 strain, and we generated a recombinant virus with 10 gene segments from murine EW and gene segment 4 from the ETD_822 strain (rEW/ETD-VP4) (Table 1).
We compared the nucleotide sequence of VP4 between the EW and ETD_822 strains to better understand the difference in VP4 between the murine RV strains used in this study. Sequence alignment shows that, compared with the wild-type EW strain, ETD_822 has only five non-synonymous amino acid substitutions (Y80H, D452N, S470L, T612A, and A711T) in VP4. Y80H is the only amino acid difference found in the VP8* domain, and the VP5* domain has two amino acid differences each in the body (D452N and S470L) and the C-terminal foot (T612A and A711T) regions (Figure 1B). It is known that the cell-culture-adapted EDIM murine RV strains are attenuated in suckling mice in terms of diarrheal dose and duration of shedding while having acquired the ability to replicate in cultured cell lines [6,[15][16][17]. We speculated that some amino acids are strongly associated with the adaptation to cultured cell lines, but not all amino acids are necessary for efficient replication in cell lines. To test whether we could rescue a recombinant RV with a VP4 protein closer to that of the more virulent, non-cell-culture-adapted progenitor EW strain, we constructed three VP4 chimeric plasmids between the ETD_822 and EW strains. These plasmids harbor nucleotide sequences of the VP8*, VP5*-body, or VP5*-foot domains from the EW strain in ETD_822 VP4 (Figure 1B). Of note, we successfully rescued all three chimeric viruses, namely rEW/ETD-VP4-EW-VP8*, rEW/ETD-VP4-EW-VP5*-body, and rEW/ETD-VP4-EW-VP5*-foot. These data suggest that the amino acid differences in these three regions of ETD_822 are not individually involved in the adaptation to cultured cell lines.
Comparative Analysis of Diarrhea Rate by Recombinant Murine RVs in a Suckling Mouse Model
To assess the capacity of the rescued viruses to induce diarrhea, we inoculated 5-day-old 129sv pups with 1 × 10³ FFU of the recombinant murine RVs or 1 × 10³ DD50 of the highly virulent non-cell-culture-adapted murine RV EW strain as a control. We monitored the mice for 12 days to compare the percentage and duration of diarrhea occurrence. The wild-type, non-cell-culture-adapted murine EW strain caused 100% diarrhea in all inoculated pups from 2 to 9 days post-inoculation (Figure 2A). Compared with EW, rD6/2 was slightly attenuated and did not cause diarrhea in all the pups (Figure 2B), consistent with the previous literature [10]. The new rEW/ETD-VP4 virus had a similar disease phenotype in that it caused diarrhea but not in all pups over time (Figure 2C), suggesting that ETD VP4 is not more virulent than RRV VP4. The three VP4 chimeric viruses (rEW/ETD-VP4-EW-VP8*, rEW/ETD-VP4-EW-VP5*-body, and rEW/ETD-VP4-EW-VP5*-foot) caused diarrhea in inoculated pups but did not show a diarrhea phenotype as robust as that caused by murine EW (Figure 2D-F). The data suggest that VP8* or the body or foot regions of VP5* from the EW strain did not individually increase the diarrheal rates compared with the parental virus with ETD-VP4 (rEW/ETD-VP4) in the suckling mouse model.
Comparative Analysis of Fecal RV Shedding by Recombinant Murine RVs in a Suckling Mouse Model
Next, we compared the amount of fecal RV shedding among the various VP4 constructs. Wild-type murine RV produced a shedding curve with a single peak of more than 2.0 at OD450 at 4 days post-inoculation, demonstrating robust replication in the mouse intestine (Figure 3A). In contrast, the fecal RV shedding curve from the rD6/2-inoculated pups showed two peaks, on days 2 and 6 post-inoculation, and the OD values did not reach as high as those of the EW strain (Figure 3B). The other four viruses that carried the murine RV VP4 gene demonstrated three peaks, on day 2, from day 5 to day 7, and from day 10 to day 11 post-inoculation, and none of these viruses reached the high levels of fecal shedding seen with the wild-type EW strain (Figure 3C-F). Statistical analysis of the fecal RV shedding between the recombinant viruses and the EW strain confirmed that none of the recombinant viruses were shed to the same level as the wild-type murine EW strain (Table 2). We also found that, compared with rEW/ETD-VP4, two of the three VP4 chimeras, i.e., rEW/ETD-VP4-EW-VP5*-body and rEW/ETD-VP4-EW-VP5*-foot, caused more fecal RV shedding, whereas rEW/ETD-VP4-EW-VP8* did not (Table 2). These results suggest that, among the five different amino acids in VP4 between the EW and ETD_822 strains, amino acids in the VP5* region are positively associated with efficient replication in the mouse intestine.
Discussion
In this study, we leveraged an optimized reverse genetics system to improve the virulence of the murine RV rD6/2-2g strain by exchanging the remaining three gene segments from the heterologous simian SA11 or RRV strains with their homologous murine counterparts, and we rescued a recombinant RV with all 11 gene segments derived from a murine RV strain (rEW/ETD-VP4). We previously attempted to rescue rD6/2 with rescue plasmids for gene segments 1 and 10 constructed by DNA synthesis. After constructing the plasmids, we performed reverse genetics with different clones and repeated this multiple times; however, none of the rescue experiments were successful. In the current study, we constructed the plasmids again by cloning the gene segments from the original D6/2 stock. Of note, the new plasmids, although identical to the original plasmids in the sequences of the T7 promoter, RV cDNA, hepatitis delta virus ribozyme, and T7 terminator, led to the successful rescue of rD6/2. This suggests that clonal differences might affect the rescue efficiency in reverse genetics. It is uncertain whether there is a difference in some other part of the plasmid and, if that is the case, whether this affects the reverse genetics results. Whole-plasmid sequencing of the plasmids would be helpful to examine whether there is any difference between the clones. It would be important to test multiple clones prepared separately when some rescue plasmids do not work, even if the plasmid has the correct sequence.
We replaced three gene segments, which encode the RNA-dependent RNA polymerase VP1 (encoded by gene segment 1), the cell attachment protein VP4 (encoded by gene segment 4), and the viral enterotoxin NSP4 (encoded by gene segment 10). Among these gene segments, gene segment 4 has been implicated in RV HRR; however, the contribution of gene segments 1 and 10 to HRR is less clear. VP1 interaction with VP2 is critical for transcription and genome replication [35]. Group A RVs have 28 VP1 genotypes and 24 VP2 genotypes (Rotavirus Classification Working Group: RCWG, updated on 3 April 2023 (https://rega.kuleuven.be/cev/viralmetagenomics/virus-classification/rcwg)) [36,37], and it is reported that the combination of VP1 and VP2 genotypes changes the VP1 polymerase activity in some cases [38]. RV NSP4 is an enterotoxin that increases host calcium levels in the cytoplasm and activates calcium-ion-dependent chloride channels, and it is directly involved in causing diarrhea [39]. In light of the sequence differences between the EW and SA11 strains, we preferred using gene segments 1 and 10 originating from murine RV to specifically focus on studying viral replication, virulence, and spread of murine RV in a mouse model.
In our previous study, we compared the role of VP8* and VP5* from heterologous RV strains in virus replication and diarrhea in a suckling mouse model. We generated VP8* and VP5* chimeric viruses between the homologous ETD and heterologous bovine UK strains on an rD6/2-2g background [14]. The results showed that, in the case of a comparison between homologous and heterologous VP4s, both VP8* and VP5* from ETD contributed to increased diarrhea in the suckling mouse model [14]. In the present study, we evaluated the role of VP8* and VP5* from murine RV strains in a murine RV backbone. This is important because we are now testing VP4 in a genetic backbone identical to the homologous murine RV backbone, as opposed to the murine-like condition used in the previous study. Despite the different genetic backgrounds, we came to the same conclusion that ETD VP4 is not more virulent than RRV VP4, suggesting that when ETD VP4 is not available, RRV VP4 can serve as a robust surrogate for in vivo studies. To delineate the contributions of VP8* versus VP5*, we generated VP4 chimeric viruses between a non-tissue-culture-adapted EW strain and a tissue-culture-adapted ETD_822 strain and compared the roles of VP8* and VP5* in a homologous murine RV strain. Of interest, VP4 chimeric viruses with the VP5* body or foot regions, but not VP8*, slightly increased the amount of RV shedding in the feces compared with a control virus with ETD-VP4 (Figure 3C,E,F and Table 2). It is possible that rEW/ETD-VP4-EW-VP5*-body and rEW/ETD-VP4-EW-VP5*-foot replicate better than rEW/ETD-VP4 in MA104 cells. Previous studies on host factors involved in RV entry demonstrated that the VP8* domain of VP4 attaches to cell-surface glycans (e.g., sialic acid and histo-blood group antigens), while the VP5* domain interacts with other coreceptors (e.g., integrins and heat-shock cognate protein 70). Subsequently, VP5* likely plays a role in membrane penetration at a post-attachment step [31]. Our current results suggest that the difference in VP4 between the non-tissue-culture-adapted EW and the cell-culture-adapted ETD_822 acts after the initial virion attachment step to cell-surface glycans. Of note, none of the recombinant viruses caused the same severe diarrheal diseases as EDIM-EW did (Figure 2). These data suggest that multiple mutations in VP4 or other viral proteins are required for robust replication in the mouse intestine.
One can imagine that there are multiple avenues available to leverage this powerful murine RV system to identify and study the molecular factors that modulate the severity of diarrhea and viral replication. For example, it would be interesting to further passage these recombinant viruses in mouse intestines, determine the nucleotide differences by next-generation sequencing, and introduce the mutations into the rescue plasmids to pinpoint the precise amino acids important for more robust replication in the mouse intestine without losing the ability of the virus to replicate in cultured cells. It would also be of interest to test these viruses in an adult mouse model to see if different results are obtained from those in the neonatal mouse system. Finally, although human enteroid cultures have proven a great tool for modeling primary human intestinal epithelial cells and for studying RV infection [40][41][42], such a system is lacking for murine enteroids, which would be useful for teasing apart the stage of entry affected by VP8* and/or VP5* mutations. In conclusion, we have developed a reverse genetics system for murine RV. This system will provide a useful tool for understanding the biology of RV in mouse models.
Figure 1. Schematic presentation of the murine RV VP4 gene. (A) Schematic presentation of RV gene segment 4. The 5′ and 3′ UTRs are shown as black boxes. VP8* and the body and foot regions of the VP5* domain in the VP4 gene are shown in light blue boxes. The numbers above the box indicate the amino acid positions. (B) Schematic presentation of the murine RV ETD_822 and EW strains and the VP4 chimeric viruses generated in this study. The five amino acids that differ between the ETD_822 and EW strains are highlighted in red inside the blue boxes. The numbers above the box indicate the amino acid positions.
Figure 3. Fecal RV shedding by wild-type murine RV and recombinant murine RVs. Five-day-old 129sv pups were inoculated with the same doses and viruses as in Figure 2: (A) 1 × 10³ DD50 of EW, or 1 × 10³ FFU of (B) rD6/2, (C) rEW/ETD-VP4, (D) rEW/ETD-VP4-EW-VP8*, (E) rEW/ETD-VP4-ETD-VP5*-body, or (F) rEW/ETD-VP4-ETD-VP5*-foot. The amount of RV in the stool samples was determined by ELISA. Each dot shows data from one pup and the line shows the average score. The dotted lines indicate the score of the limit of detection determined from the stool of uninfected pups.
Table 1. Gene constellation of the D6/2 and recombinant viruses.
Table 2. Summary of the statistical analysis of fecal RV shedding 1,2.
|
2024-05-17T15:16:39.497Z
|
2024-05-01T00:00:00.000
|
{
"year": 2024,
"sha1": "9d2292366baa4d682f9c85d62cec7c3af1c21e5b",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1999-4915/16/5/767/pdf?version=1715506941",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "dd6b31697d92de1fc9bcf8bcb64b8c8fdc865667",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
119287094
|
pes2o/s2orc
|
v3-fos-license
|
Goos-H\"anchen shifts in frustrated total internal reflection studied with wave packet propagation
We have investigated the Goos-Hänchen (GH) shifts in frustrated total internal reflection (FTIR) using wave packet propagation. In the first-order approximation of the transmission coefficient, the GH shift is exactly the expression given by the stationary phase method and thus saturates to an asymptotic constant in two different ways depending on the angle of incidence. Taking the second-order approximation into account, the GH shift always depends on the width of the air gap due to the modification of the beam width. It is further shown that the GH shift with the second-order correction increases with decreasing beam width at small incidence angles, while at large incidence angles it reveals a strong decrease with decreasing beam width. These phenomena offer a better understanding of the tunneling delay time in FTIR.
It is well known that a light beam totally reflected from an interface between two dielectric media undergoes a lateral shift from the position predicted by geometrical optics [1]. This phenomenon is referred to as the Goos-Hänchen (GH) effect and was theoretically explained by Artmann's stationary phase method [3] and Renard's energy flux method [4]. Because of the potential applications in integrated optics [2], optical waveguide switches [5], and optical sensors [6,7], the GH shifts, together with the three other non-specular effects of angular deflection, focal shift and waist-width modification, have been extensively investigated in partial reflection [8,9,10,11,12,13], attenuated total reflection [14,15], and frustrated total internal reflection (FTIR) [16,17,18,19,20].
From a somewhat different perspective, the optical tunneling phenomenon in FTIR has attracted much attention in the last two decades [21,22,23,24,25] because of the analogy between FTIR and quantum tunneling. Theoretical [21,22] and experimental [23,24] investigations have demonstrated that the GH shifts in FTIR play an important role in the superluminal tunneling time and the well-known "Hartman effect" [26], which describes how the group delay for quantum particles tunneling through a potential barrier saturates to a constant for an opaque barrier. Recently, Martinez and Polatdemir [27] studied the effect of the beam width on the GH shift (which is proportional to the tunneling time) to offer complementary insights into the origin of the "Hartman effect" in FTIR. In addition, Haibel et al. [19] carried out a comprehensive microwave study of the GH shift in FTIR as a function of polarization, beam width, and incidence angle, which challenges its theoretical descriptions. However, the current expressions of the GH shifts given by the stationary phase method and the energy flux method are independent of the beam width.
The main purpose of this Brief Report is to investigate the GH shifts in FTIR by wave packet propagation. It is shown that the GH shift in the first-order approximation of the transmission coefficient is exactly the expression of the GH shift given by the stationary phase method. The GH shift in this case approaches the saturation value in two different ways depending on the incidence angle. Taking the second-order approximation into account, the GH shift always depends on the width of the air gap. It is further shown that the GH shift with the second-order correction becomes strongly dependent on the beam width. These phenomena offer a better understanding of the tunneling delay time in FTIR.
For simplicity, we consider a TE polarized beam incident on the double-prism structure with angular frequency $\omega$ and incidence angle $\theta_0$, as shown in Fig. 1, where $a$ is the width of the air gap. Denote by $\epsilon$, $\mu$ and $n$, respectively, the permittivity, permeability and refractive index of the prism. For a well-collimated beam, the electric field of the incident beam can be written as a superposition of plane-wave components, where $k_x = nk\cos\theta$, $k_y = nk\sin\theta$, $k = \omega/c$, $n = \sqrt{\epsilon\mu}$, $c$ is the speed of light in vacuum, $\theta$ is the incident angle of the plane-wave component under consideration, and the time dependence $\exp(-i\omega t)$ is implied and suppressed. For a Gaussian-shaped incident beam whose peak is assumed to be located at $x = 0$, its angular spectral distribution is also a Gaussian function, centred at $k_{y0} = nk\sin\theta_0$, with width set by $w_y = w_0/\cos\theta_0$, where $w_0$ is the width of the beam at its waist. According to the Maxwell equations and boundary conditions, the field of the transmitted beam is obtained from this spectrum weighted by the transmission coefficient $T = \exp(i\phi)/f$, where $\kappa = (k_y^2 - k^2)^{1/2}$. Firstly, we look at the GH shift in the first-order approximation of the transmission coefficient. Expanding the exponent of $T$ in a Taylor series around $k_{y0}$ and retaining terms up to first order, we obtain an expression in which $T_0 = T(k_{y0})$ and $d/dk_{y0}$ denotes the derivative with respect to $k_y$ evaluated at $k_y = k_{y0}$. We then introduce two real parameters $L'_t$ and $L''_t$, defined in terms of the phase $\phi$ and the magnitude $\ln|T(k_y)|$ of the transmission coefficient.
Substituting Eq. (5) into Eq. (3) and employing the paraxial approximation condition, we obtain the transmitted beam at $x = a$. It is clear that the lateral shift $L'_t = -d\phi/dk_{y0}$ is the same as the one obtained by the stationary phase method [3]; its explicit form involves $s_c = a k_{y0}/\kappa_0$. When the width of the air gap is large enough, that is, $a \gg 1/\kappa$, the GH shift tends to a constant. With increasing air gap the GH shift reaches an asymptotic constant, which is in agreement with the experimental results [19] and is also closely related to the counterintuitive "Hartman effect" of the tunneling delay time in the limit of an opaque barrier [23,24].
More interestingly, what we emphasize here is that the GH shift approaches the saturation value in two different ways depending on the angle of incidence. The GH shift (9) can be rewritten in terms of the parameter $g_0 = (k_{x0}^2 - \kappa_0^2)\tanh(\kappa_0 a)/2k_{x0}\kappa_0$. Keeping the next-to-leading term for large $a$ shows how the asymptotic value is approached. It is clearly evident from this expansion that for $\kappa_0^2 - k_{x0}^2 < 0$ the GH shift increases monotonically to reach the saturation value, while for $\kappa_0^2 - k_{x0}^2 > 0$ it reaches the saturation value from above, that is, there is a hump before it attains saturation. Therefore, when the necessary condition on the incidence angle is satisfied, the GH shift approaches the saturation value from above. Fig. 2 shows that for large $a$ the GH shift $s^p_t$ is independent of the width $a$ of the air gap and hence saturates to an asymptotic constant, where $\lambda = 32.8$ mm and $n = 1.605$ (corresponding to the critical angle $\theta_c = 38.5^\circ$ for total reflection and $\theta_p = 56.4^\circ$) [19]. Furthermore, the GH shift approaches the asymptotic limit from above for $\theta_0 > \theta_p$ and from below for $\theta_0 < \theta_p$. This phenomenon is not due to the interference time [27] but results from the interference between the incident and reflected beams. Of course, it can also be seen from the relationship between the GH shift and the group delay discussed in Ref. [28] that the delay time in FTIR also saturates to a constant from above for $\theta_0 > \theta_p$ [29], in the same way as in quantum tunneling for $E < V_0/2$ [30,31], since the self-interference delay time that comes from the overlap of incident and reflected waves in front of the barrier is of great importance [32].
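The saturation behaviour described above can be checked numerically. The sketch below assumes the standard transmission coefficient of a symmetric air gap for TE waves (the rectangular-barrier analogue), $1/T = \cosh\kappa a + (i/2)(\kappa/k_x - k_x/\kappa)\sinh\kappa a$, takes its phase, and differentiates it with respect to $k_{y0}$ by finite differences; this reconstructs the stationary-phase shift under that assumption rather than reproducing the paper's Eq. (9) verbatim. Parameters follow the microwave experiment quoted above.

```python
# Numerical check of the stationary-phase GH shift and its saturation with gap width.
# Assumes the TE transmission phase of a symmetric air gap (rectangular-barrier analogue).
import numpy as np

lam = 32.8e-3            # vacuum wavelength [m], as in the microwave experiment [19]
n = 1.605                # prism refractive index (critical angle ~38.5 deg)
k = 2 * np.pi / lam      # vacuum wavenumber

def phase(theta0, a):
    """Phase phi of T = exp(i*phi)/f for gap width a and incidence angle theta0."""
    ky = n * k * np.sin(theta0)
    kx = np.sqrt((n * k) ** 2 - ky ** 2)   # component along x inside the prism
    kappa = np.sqrt(ky ** 2 - k ** 2)      # evanescent decay rate in the air gap
    return -np.arctan(0.5 * (kappa / kx - kx / kappa) * np.tanh(kappa * a))

def gh_shift(theta0, a, dky=1.0):
    """L_t' = -d(phi)/d(ky0), evaluated by a central finite difference."""
    ky0 = n * k * np.sin(theta0)
    th_plus = np.arcsin((ky0 + dky) / (n * k))
    th_minus = np.arcsin((ky0 - dky) / (n * k))
    return -(phase(th_plus, a) - phase(th_minus, a)) / (2 * dky)

for a_over_lam in (0.2, 0.5, 1.0, 3.0):
    a = a_over_lam * lam
    print(f"a = {a_over_lam:.1f} lambda: "
          f"L_t' = {gh_shift(np.deg2rad(45), a) * 1e3:6.1f} mm at 45 deg, "
          f"{gh_shift(np.deg2rad(75), a) * 1e3:6.1f} mm at 75 deg")
```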
In what follows we show the influence of the beam waist width on the GH shift in FTIR. To this end, we approximate the exponent of the transmission coefficient up to the second-order term and introduce two new real parameters, $F'_t$ and $F''_t$.
Fig. 2. The solid curve corresponds to the GH shift in the first-order approximation; the dashed and dotted curves correspond to the GH shifts in the second-order approximation.
These parameters are defined in terms of the phase and magnitude of $T(k_y)$. Substituting expression (15) into Eq. (3), using the paraxial condition (7), and neglecting some unimportant factors, we finally obtain the field of the transmitted beam at $x = a$, where $\eta_t = L''_t/w_{ty}$, $w_{tf} = (w_{ty}^2 - iF'_t)^{1/2}$, and $w_{ty}^2 = w_y^2 + F''_t$ correspond to the angular deflection, focal shift and waist-width modification, respectively [13]. Obviously, the second term on the right-hand side of the resulting GH-shift expression, Eq. (18), is a second-order correction, which leads to the dependence of the GH shift on the beam width. In addition, it also results in the dependence of the GH shift on the width of the air gap in the opaque-barrier limit. Fig. 3 demonstrates that the GH shift in the second-order approximation depends on the beam width, where (a) $\theta_0 = 45^\circ$ and (b) $\theta_0 = 75^\circ$, and the other parameters are the same as in Fig. 2. Compared with Fig. 2 discussed above, the GH shift becomes dependent on the width $a$ in the limit of an opaque barrier, due to the second-order correction. When the beam width is large, that is, when the divergence angle is small, the correction to the GH shift can be neglected; thus, for a well-collimated beam the GH shift agrees with that given by the stationary phase method. More importantly, Fig. 3 shows that the GH shift increases with decreasing beam width at $\theta_0 = 45^\circ$, while the GH shift at $\theta_0 = 75^\circ$ shows a strong decrease with decreasing beam width. As shown in Fig. 3(a), the GH shift becomes linearly dependent on the width of the air gap, because the Fourier components of the incident beam above the critical angle are strongly suppressed, so that the plane-wave components just below the critical angle start to dominate. That is to say, when the incidence angle is larger than but close to the critical angle, the wave-vector filtering is more pronounced for a larger beam width; the transmission is then essentially not tunneling at all, and thus the GH shift increases with increasing width $a$, as one expects classically. This also implies the violation of the "Hartman effect" for quantum tunneling in the time domain [33].
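The beam-width dependence can also be seen directly in the wave-packet picture: propagate a Gaussian angular spectrum through the gap transmission coefficient and track the centroid of the transmitted intensity. The sketch below does this under the same assumed gap transmission coefficient as above; the Gaussian-spectrum normalization and the use of the intensity centroid (rather than the peak) as the shift are choices of this sketch, not taken from the paper.

```python
# Wave-packet propagation sketch: transmitted-beam centroid versus incident waist w0.
import numpy as np

lam, n = 32.8e-3, 1.605
k = 2 * np.pi / lam

def transmission(ky, a):
    """Assumed TE transmission coefficient of the air gap (complex, full spectrum)."""
    kx = np.sqrt((n * k) ** 2 - ky ** 2 + 0j)
    kappa = np.sqrt(ky ** 2 - k ** 2 + 0j)
    return 1.0 / (np.cosh(kappa * a) + 0.5j * (kappa / kx - kx / kappa) * np.sinh(kappa * a))

def transmitted_shift(theta0, a, w0):
    """Centroid of the transmitted intensity for an incident Gaussian of waist w0."""
    ky0 = n * k * np.sin(theta0)
    wy = w0 / np.cos(theta0)                          # projected beam width
    ky = np.linspace(ky0 - 6 / wy, ky0 + 6 / wy, 1501)
    A = np.exp(-((ky - ky0) ** 2) * wy ** 2 / 4.0)    # assumed Gaussian angular spectrum
    y = np.linspace(-20 * w0, 20 * w0, 1201)
    field = np.trapz((A * transmission(ky, a))[None, :] * np.exp(1j * np.outer(y, ky)), ky, axis=1)
    profile = np.abs(field) ** 2
    return np.trapz(y * profile, y) / np.trapz(profile, y)

for w0_over_lam in (1, 3, 10):
    shift = transmitted_shift(np.deg2rad(45), 0.5 * lam, w0_over_lam * lam)
    print(f"w0 = {w0_over_lam:2d} lambda: centroid shift = {shift * 1e3:6.1f} mm")
```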
Finally, we take a brief look at the microwave experiment on GH shifts. It was argued [19] that the influences of beam width and incidence angle challenge the current descriptions of the GH shift in FTIR. Fig. 3(a) shows that the GH shift increases with decreasing beam dimension, corresponding to the beam waist width, which is in agreement with the experimental results in Ref. [19], where the physical parameters are the same as those in Fig. 2. In addition, it is also predicted in Fig. 3(b) that the GH shift decreases with decreasing beam waist when the incidence angle is far from the critical angle. In a word, the improved formula for the GH shift (18), including the modification by the beam width, gives a better understanding of the GH shift in FTIR both theoretically and experimentally.
In summary, we have investigated the GH shifts in FTIR by wave packet propagation. It is found that the GH shift in the first-order approximation of the transmission coefficient, which is exactly the expression of the GH shift obtained by the stationary phase method, approaches the saturation value in two different ways depending on the angle of incidence. The explicit expression of the GH shift in the second-order approximation shows a strong dependence on the beam width. It is further shown that the GH shift with the second-order correction increases with decreasing beam width at small incidence angles, while at large incidence angles the GH shift decreases with decreasing beam width. All these theoretical results can be applied to explain the experiment on GH shifts [19] and offer a hint towards a better understanding of the tunneling delay time in FTIR [28].
This work was supported in part by the National Natural Science Foundation of China (60806041, 60877055), the Shanghai Rising-Star Program (08QA14030), the Science and Technology Commission of Shanghai Municipal (08JC14097), the Shanghai Educational Development Foundation (2007CG52), and the Shanghai Leading Academic Discipline Program (S30105). X. Chen is also supported by Programme Juan de la Cierva of Spanish Ministry of Science and Innovation.
|
2009-04-16T12:44:43.000Z
|
2009-04-16T00:00:00.000
|
{
"year": 2009,
"sha1": "7f4b86476883b76ba8657667cbac625db28073a4",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/0904.2478",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "7f4b86476883b76ba8657667cbac625db28073a4",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
}
|
256790495
|
pes2o/s2orc
|
v3-fos-license
|
Factors Affecting Climate Change Governance in Addis Ababa City, Ethiopia
: Climate change in Ethiopia’s capital city of Addis Ababa is characterized by an increase in rainfall and subsequent flooding and severe temperature with more heat waves. The city government has now recognized climate change as a serious threat, including it being a reason for loss of life and livelihoods. Even though governance has become a key mechanism to address a reduction in greenhouse-gas emissions and vulnerability to climate change, the practice of climate-change governance has been undermined by different factors. Thus, this study examined factors affecting climate-change governance in the city. The research adopted a mixed research design and depends on primary and secondary data sources. The binary logistic regression model and descriptive statistics were both used to analyse the quantitative data, while the descriptive method was used for the qualitative data. The results reveal that a lack of coordination, political will and leadership are the major factors that hinder the practice of governance in the city, followed by inadequate finance, policy, strategy, and regulation. In addition, a shortage of knowledgeable experts, lack of access to information and technologies had their own contributions to the ineffectiveness of climate-change governance. Thus, the city administration should place emphasis on climate change, giving it comparable weight to other crosscutting issues, and enabling the functioning of the steering committee with a strong accountability system. In addition, the city administration should take aggressive measures, including revising or formulating new policy, strategy or regulation, and even creating an independent institution for climate-change issues. Furthermore, the Addis Ababa City environmental protection and green development commission should create an enabling environment to attract non-state actors, in general, and NGOs, in particular, and should assign one directorate to mobilise finance, following the approach taken by the federal environmental protection commission. The commission should implement a mechanism to efficiently utilize the budget by applying continuous monitoring and evaluation. The commission should also provide continuous training and capacity building for leaders and experts at sub-city and Woreda levels.
Introduction
Climate change is one of the most contested and undeniable environmental issues, and it has been receiving significant attention around the world. It manifests as rising temperatures and increasingly erratic rainfall, as well as severe floods and droughts [1,2]. Climate change is no longer a low-level issue but has become a life-threatening global emergency [2,3]. According to a study by the IPCC [2,4], temperatures are predicted to rise by 2.4 °C by the year 2100. This is significantly above the target value of 1.5 °C, which was accepted by the Paris Agreement. The effects of this increase are likely to be disastrous in the future.
All economic sectors are affected by climate change, which also poses different challenges for environmental systems [5,6].These challenges are more pronounced in cities, since most of the world's people resides there.Currently, cities are significantly affected by consequences of climate change such as heat waves, flooding, heavy rains and storms [7,8].At the same time, cities produce around two-thirds of the total global greenhouse-gas (GHG) emissions, and account for a similar proportion of total global energy consumption [7,9].
During the world leaders' meeting of the 26th annual summit (COP26), held in Glasgow in 2021, the assessment of past performance revealed that the targets for a reduction in GHG emissions had not been achieved [10]. While around 23 countries signed the COP26 coal-to-clean-power transition agreement, the largest coal producers, including Australia, China, India and the United States, were missing from the agreement. A total of 105 nations signed an agreement to minimize the sources of methane, but the agreement was not signed by the top three methane-producing countries (China, India and Russia), which are responsible for about 35% of the methane in the atmosphere [10]. Studies in climate modeling show that urgent policy responses are needed [2,4], but most countries' governmental climate action in place today aims to achieve only a gradual reduction in GHG emissions [10][11][12].
Climate change is a global threat which requires policy action at international, national, and local levels of governance.Climate-change governance refers to a range of initiatives, regulations, and government decisions aiming to establish cooperation between state and non-state actors in dealing with climate change [13,14].It is a subset of the broader governance field, but the difference is that a greater emphasis is placed on the mitigation and adaptation of climate change [15,16].In environmental terms, climate-change governance is the mechanisms and response measures aimed at steering social systems towards preventing, mitigating, and adapting to the risks posed by climate change [11,17,18].Climate-change governance in cities is manifested by the process of the formulation and implementation of adaptation and mitigation measures [14,19].Cities are rapidly becoming key locales for climate-change governance, through the designing of institutions and infrastructures that drive decarbonization and adaptation to the changing climatic conditions [7,[20][21][22][23].The first successful international negotiation of the 2015 Paris agreement marked a milestone in global climate governance [24].However, the practicality of the agreement was questioned, particularly due to the withdrawal of the United States, the world's second-largest emitter of GHGs [23,25].The COVID-19 pandemic also caused significant disruption to the response to climate change in cities and created a great challenge to meet the global goals defined in the Paris agreement [3,10,25,26].
To this end, several empirical studies show that the effectiveness of climate-change governance is hindered by a number of factors.One of the main determinants of responses to mitigation and adaptation is effective policy, strategy, regulation and law [27,28].Conflicts of interest during issue framing or giving priority to mitigation and adaptation in relation to other policy concerns, such as infrastructure provision or poverty reduction, are the major factors that hindered climate-change governance [29][30][31].A lack of implementation of policy, strategy, rules, plans and inadequate legislation are also part of the factors that determine climate-change response [24,32].
Several studies indicated that a shortage of finance is another factor for the implementation of climate-change response measures [27,29,[32][33][34].Mostly, local governments or municipal authorities face a shortage of finance to implement mitigation and adaptation because of the existence of many competing issues on urban agendas [27,32,35].According to Aylett [29], cities face three major resource-related challenges for an effective response to climate change, including access to financial, human and technological resources.Lack of human resources is also a major challenge of climate-change governance in cities [17,29,36].Governance capacity to respond to climate change is also affected by legal frameworks and legitimate institutions [37].Lack of an independent institution that is directly accountable for climate-change matters is also a factor that determines the governance of climate change, especially at a local level [20,38].
Weak coordination of actors and sectors are also key factors that hinder climate-change governance [11,32,33,37,39,40].Most countries' local governments or municipalities lack cooperation with academia, the private sectors, the community, and NGOs [34] and lack vertical coordination between national and local levels, which is important to devise solutions to governance problems at the local level [3].
Another factor that hinders climate-change governance in cities is access to updated sources of data, including future climate predictions, GHG inventories, and climate vulnerability assessments and impacts such as heat waves and flood [34].Information accessibility and availability can improve decision-making skills by assisting decision-makers in assessing and prioritizing climate change [27,41].Building a solid foundation for effective urban climate-change governance requires scientific data [37].
The political willingness of leaders and leadership is another major factor that hinders the climate-change response [34]. A lack of political will is a challenge to the collaborative governance of the climate-change response in most cities [38,42]. Leadership quality is also critical in shaping climate-change responses [32,37,43]. Climate-change action is also affected by a lack of the technology needed to act on climate-change issues [3,32].
When we look at African cities, the coordination of actors is a major factor that hinders good climate-change governance [5,44,45].In African cities, collaborations regarding climate-change response between the local government and government departments at different levels and sectors, civil society, residents, and community-based organizations are weak [35,[46][47][48].A study conducted in two cities of Africa, Karonga from Malawi and Dar es Salaam from Tanzania, found that climate-change governance is hindered by poor collaboration among governments, the private sector and civil-society organizations (CSOs) [47].A study conducted in Lusaka, Zambia and Durban city, South Africa, indicated that there is a lack of finance and capacity problems for executing policies, strategies and plans at all levels of government.This is more pronounced at the local government level [48].Lacking political willingness and poor leadership are other problems for implementing an effective climate-change response in African cities [45].
When it comes to Ethiopia, an African country, one of the factors that determines the climate-change response is a lack of coordination among actors [49]. The country accepts and implements the New Urban Agenda [50] and the Sustainable Development Goals [51], which are necessary for the development of an industrial base to create employment in urban areas. However, the climate-change response still ranks low on the list of overall development priorities [50,52]. According to a study conducted by Climate Action Tracker [53], Ethiopia showed less concern about climate-change action compared to other countries, such as Kenya and South Africa. However, without an effective response to climate change, sustainable development cannot be achieved [23]. The implementation and evaluation results of the Climate-Resilient Green Economy (CRGE) strategy show that the strategy was not effective, as it lacked political commitment at local and national levels, as well as in multiple sectors [52,54]. In the country, limited numbers of non-state actors were involved in the governance of climate change [53,55]. In addition, security concerns and the pandemic have adversely hampered the implementation of climate action [49,56].
Addis Ababa City is highly affected by climate change, including flooding, drought, heat waves and landslides [57][58][59][60][61]. In the city, climate change and its impacts are aggravated by an unprecedented rate of urbanization and rapid population growth, built-up-area expansion, low green-area coverage and land use change [57,[62][63][64][65]. To govern climate change in the city, the Addis Ababa City Environmental Protection and Green Development Commission (AAEPGDC) was given the mandate to implement a climate-resilient green growth strategy for 10 years, until the year 2025, based on the country's 2011 CRGE strategy [66]. The strategy addresses both climate-change adaptation and mitigation issues, and its implementation began in 2014.
Adapting this strategy, the AAEPGDC has made climate-change issues mainstream across various offices: land use, housing, transportation, water supply, solid waste, education, energy and more than 22 other sectors [67][68][69].Even though the strategy is in place, the implementation of the strategy is still piecemeal and climate-change response action is given a low priority compared to other issues.
The provision of empirical information on the major factors that hinder climate-change governance, based on a comprehensive study, is vital to city administrators at different levels and to other non-state actors. This is important for redesigning sound policies and strategies to address climate-change impacts and reduce GHG emissions in the city. However, most of the previous studies related to climate change in the city mainly focused on trends, vulnerabilities and impacts [57,[59][60][61][62]64,[70][71][72][73]. None of them focused on climate-change governance by considering governance factors. Therefore, it is important to ask the question: what are the major factors that hinder climate-change governance in Addis Ababa city? To provide a concrete response to this query, conducting empirical research using both quantitative and qualitative methods is essential. Numerous international scientific research works have been carried out in this area. However, those studies concentrated on comparative studies in cities of industrialized countries, neglecting cities in developing countries. Hence, this research intends to bridge this gap by identifying the factors that hinder climate-change adaptation and mitigation response actions. The study's findings can be applicable to other African cities facing comparable difficulties.
Methodology
Theoretical Gap of the Research
In the past, state-centred theories were applicable, where governments acted as the leading planners, regulators, and policy formulators and implementers [74]. Starting from the 1970s, several new issues emerged, including the stronger influence of local governments and transnational institutions, globalization, the deregulation of the financial market, and others. In response to these pressures, national governments began to explore new directions, which involve depending on horizontal connections and collaboration across private-public divides. This brings new ways of formulating and implementing public policy, which are described by the concept of governance [18,75]. Governance implies that national governments share authority over the formulation and implementation of public policy with local government agencies, private actors, NGOs, transnational organizations, and citizen groups [76].
Climate-change governance is a subset of the broader notion of governance. However, the difference is that it places more emphasis on the mechanisms of coordinating different social actors in order to prevent, mitigate, and adapt to the threats posed by urban climate change [14]. The key theoretical argument of climate-change governance is that all actors are responsible for addressing the climate-change-related issues of cities in multi-level systems [77]. Global actors are increasingly in agreement that effective climate-change governance has a long-term impact on climate efforts. Hence, in general, good climate-change governance is often indicated by the effectiveness with which climate-governance actions realize the objective of a reduction in GHG emissions and risks [78].
The 1990s were considered a turning point for the climate-change response because of the increased awareness of the difficulties caused by urban GHG emissions [77,79]. The foundation of the current system of global governance is the UN Framework Convention on Climate Change (FCCC), the Kyoto Protocol, and the 2015 Paris Agreement to reduce greenhouse gases and the release of highly toxic persistent organic compounds [80]. Various theoretical concepts that underpin the governance of environmental issues, such as climate change, have emerged to minimize the impact of climate change, such as network theory, urban regime theories, green growth cities, sustainability cities, smart cities, and new urbanism [81][82][83][84][85][86][87].
However, cities in both developed and developing countries still face challenges in governing climate change effectively. For many cities in the world, housing provision, sanitation and waste disposal are the most essential issues for governance [21]. Across the world, there are still high levels of policy rhetoric about urban climate governance, but the practice on the ground is limited [22,37]. Research on the development of urban climate policy and governance began in the mid-1990s and focused on single case studies, predominantly on cities in the United States, Canada, Europe, and Australia. Although some research has more recently been conducted in Asia, South Africa, and Latin America [14], the information from cities in developing nations is still fragmentary [22]. Thus, the provision of empirical information on the major factors that hinder climate-change governance, using a comprehensive study, is vital to city administrators and other non-state actors for redesigning sound policies and strategies to address climate-change impacts and reduce GHG emissions in developing countries. Thus, this research is intended to contribute to bridging this gap by identifying the factors and applying a mixed-methods approach.
Conceptual Framework
The motivation for the need for climate-change governance includes the rapid increase in population and economic growth, transportation, and the increase in GHG emissions [9]. Now, and going forward, cities have also become the home of a major section of the population and its economic activities, which makes them particularly vulnerable to climate-change impacts. A conceptual framework is primarily designed to guide the research work, showing the interactions between and among variables. In this regard, the central part of the framework is urban climate-change governance. Based on the empirical and theoretical literature, urban climate-change governance requires multiple interactions among the major urban actors, which include private business sectors, public agencies, and civil-society organizations.
City governments create partnerships with civil society and the private sector to govern cities in a sustainable manner. Especially in developing countries, governments alone cannot provide adequate and high-quality infrastructure and services for residents because of the fast rate of urbanization and the low level of economic development. Local governments create an enabling environment; they empower and assign clear roles and responsibilities to civil society and the private sector for the formulation and implementation of a city's climate-change responses [29]. Relationships between governments and civil society, between governments and the private sector, and between civil society and the private sector, as well as the interaction of all three (governments, the private sector, and civil society), are needed to formulate and implement both mitigation and adaptation measures.
A relationship between public and private actors, for the purpose of emission reduction or in the formulation and implementation of adaptation and mitigation strategies, such as involving private actors in energy-saving and emission-reduction schemes, transportation, the provision of renewable energy, waste management and the mobilization of resources, is crucial in cities. Governments also interact with actors from civil-society organizations, for example, in setting climate-change agendas, collecting expert opinions, developing policy directions, engaging in mitigation and adaptation actions, and engaging in public-awareness programs. State actors working on environmental issues together with civil-society organizations can gain the advantage of achieving closer contact with grassroots movements and communities [20]. In addition, the relationship of the private sector and civil-society organizations, through sponsorships, consultation or an exchange of ideas, joint research or development, or the promotion of new products and new markets, is important for non-state actors themselves and for the government to implement policies and strategies effectively [88].
Climate-change governance, being a rapidly growing research agenda among academics and development partners, has generated discussions concerning which variables to employ as factors that determine the effectiveness of the governance of climate-change responses. Based on the theoretical and empirical literature, seven variables, covering 30 specific questions, were selected to analyse the factors that hinder governance related to climate change. The factors are adapted from many scholars. These factors include policies, strategies and regulations; finance; human resources; technologies; political willingness and leadership; information; and coordination [7,13,14,[20][21][22]27,29,33,37,77].
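The abstract notes that the quantitative survey data were analysed with a binary logistic regression. Purely as a hedged illustration, the sketch below shows how the seven factors listed above could enter such a model; the outcome variable, factor coding, column names, and data file are hypothetical assumptions and are not taken from the study.

```python
# Hypothetical sketch of the binary logistic regression mentioned in the abstract:
# governance effectiveness (binary) regressed on the seven factor scores.
import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("survey_responses.csv")      # hypothetical file of coded survey items
factors = ["policy", "finance", "human_resource", "technology",
           "political_will", "information", "coordination"]

X = sm.add_constant(df[factors])              # factor scores, e.g. Likert composites
y = df["governance_effective"]                # 1 = rated effective, 0 = not effective

result = sm.Logit(y, X).fit()
print(result.summary())
print(np.exp(result.params))                  # odds ratios per factor
```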
Study Area Description
Addis Ababa, the capital city of Ethiopia, is geographically located in the central part of the country, surrounded by the Oromiya region (Figure 1). Specifically, it is located at 9°1′48″ N latitude and 38°44′24″ E longitude. The city has a total size of 540 square kilometres [89]. Its altitude ranges from 2100 m, in Akaki, in the south part, to more than 3000 m above sea level, in the Entoto Mountain, in the north part [90,91]. The administrative hierarchy in the city is composed of three levels: the top level, known as the city administration; a middle level, known as the sub-city; and the lowest level, known as the Woreda level. Currently, the city is divided into eleven sub-cities and 120 Woredas [92]. The sub-cities include Gulele, Yeka, Lemikura, Kirkos, Akaki, Arada, Bole, Lideta, Addis Ketema, Nifas Silk Lafto and Kolfe Keraniyo (Figure 1). The topography of the city varies, especially between its northern and southern parts. The altitude and slope decrease from north to south [89].

Addis Ababa is the primate city, which dominates the political, economic, and historical issues of the nation. It is the capital of the federal government, and it is also the headquarters of the African Union [90]. As the last census in Ethiopia was carried out in 2007, the current population of the city is based on estimation. There are several estimates of the population of the city in different sources. However, the national central statistical agency that carries out national census projections is the appropriate source. Considering the trend in the city's population, in the year 2007 the population was 2,739,551 [93], with 22.77% of the 11.86 million people living in urban areas of the country [94]. In 2015, the population was around 3.3 million, whereas, currently, the estimated population is around 4 million [91]. The population is projected to reach about 6 million in 2030 [94] (Figure 2).
Climate-Change Impacts and Responses in Addis Ababa City
Climate change in Addis Ababa is manifested by an increase in rainfall and subsequent flooding and severe temperatures, with more heat-wave occurrences [57,[59][60][61]64,98]. The major direct impacts of climate change in the city are flooding, drought and the urban heat island (UHI) effect [99] (Figure 3).

Addis Ababa is more vulnerable to the impacts of climate change in terms of extreme rainfall, which causes floods [59,61,65,71]. A significant increase in city flooding is evident due to rapid urbanization, loss of green areas, poor drainage systems and climate change [72]. There were 89 flood-related hazards in total between 2013 and 2018, with a particularly substantial increase between 2017 and 2018 [65]. Floods have caused losses of human life and harm to infrastructure and property [65,71,72,100]. More irregular heavy-rainfall events are expected to occur in the future, and this is likely to result in worsening flooding conditions in the city [62][63][64]. Figure 4 shows the effects of floods, which cause damage to different infrastructures, including residential and commercial buildings, roads, and water systems, as well as the disruption of traffic and the loss of property and human lives.

In addition to flooding, drought is another impact of climate change, and it affects the quantity and quality of water and the health and wellbeing of Addis Ababa's dwellers [57,58,65]. In recent years, the city has already been feeling the pressure of unprecedented drought because of reductions in seasonal rainfall, reductions in river flows, reductions in inflow into reservoirs, falling groundwater tables, and increased temperatures, which, in turn, increase evapotranspiration from the reservoirs [58,64,65].
Overheating and the UHI effect are also major consequences of climate change in the city. Overheating or heat waves, occurring on extremely hot days and nights, can have a substantial impact on health (heat stress), on air pollution, and on water and energy supply and infrastructure [65,86]. The ways the city grows and develops are both key drivers of climate change and of its impacts. Besides the emission of greenhouse gases from different sectors, the unprecedented rate of urbanization and rapid population growth, built-up-area expansion, reduced green-area coverage, and land-use changes exert the strongest anthropogenic influence on climate change [57,58,64,101]. The impacts are also exacerbated by a lack of consideration of climate-sensitive issues in urban planning [63].
Starting from 2014, the AAEPGDC prepared a climate-change-resilient green development strategy to protect and enhance the quality of life of the city's residents. As shown in Figure 5a, the strategic responses to climate change in the city are being practiced in buildings, transport, energy, waste, industry, urban agriculture, land-use change, forestry and other sectors. Furthermore, the strategy aimed to set out how to reduce the emission of GHGs from the various urban system components of Addis Ababa. The strategy offers a structure to help partners collaborate more effectively and efficiently to carry out adaptation and mitigation measures [99].
Adaptation is a key means by which resilience and reduced vulnerability in local communities and economies are built. Adaptation combines risk management, economic activity adjustment, infrastructure modifications, and changes in community needs. A fundamental problem for decision makers is determining priorities and appropriate activities that fit the dynamics of the city and lessen the anticipated local climate-change impacts in Addis Ababa. The key to effective adaptation that shields communities from the effects of climate change is a locally relevant, cogent, and multidisciplinary response strategy that works across government and community. An effective adaptation plan needs to reveal the anticipated local impacts of climate change and to build resilience when dealing with the city's vulnerabilities. Adaptation efforts in the city can offer co-benefits for climate-change mitigation and for local economic development. The climate-change-resilient green growth strategy in Addis Ababa addresses both climate-change adaptation and mitigation issues, as shown in Figure 5a,b [99]. Climate-change governance actions started well after the formulation of the strategy. The commission was appointed to oversee issues related to the city's environment and climate change. Starting from 2015, major climate actions undertaken in the city include tree planting, encouraging community-level adaptation, and the expansion of the light-rail transit network. These actions were taken to minimize climate-change risk and to reduce emissions. The city has implemented car-free days to promote walking and cycling; the car-free day is held every month and aims to bring about attitudinal change in the long run. Smart parking is another instrument to improve traffic flow and reduce GHG emissions [65].
Currently, climate-change governance is being practised in the city to address both climate-change adaptation and mitigation issues. During the planning process for climate action, the city identified 20 priority adaptation measures and 14 mitigation initiatives. The city developed a climate action plan in 2020, a process that was started in 2017 by the C40 Cities Climate Leadership Group. Addis Ababa joined C40 as a member of the program and pledged to achieve net-zero GHG emissions by 2050.
Sampling Methods
The data for this research were gathered from employees drawn from three administrative levels of the AAEPGDC: the city level, the sub-city level (10 sub-cities at the time of data collection; one sub-city was added afterwards, bringing the current number to 11), and the Woreda level (a total of 20 Woredas, with 2 Woredas randomly selected from each sub-city). We included Woredas 7 and 8 from Gulele sub-city, 5 and 6 from Yeka, 8 and 9 from Kirkos, 5 and 8 from Akaki, 1 and 10 from Arada, 7 and 6 from Bole, 1 and 10 from Lideta, 5 and 8 from Addis Ketema, 1 and 12 from Nifas Silk Lafto, and 6 and 7 from Kolfe Keraniyo.
We purposively consulted climate-change and pollution experts to gather useful information, given the existence of various directorates and departments, such as green-area development, forest management, natural-resource management, climate change and pollution, and others. Finally, because of the small number of available respondents, a total of 232 employees were selected. As shown in Table 1, questionnaires were distributed among these 232 experts, and 219 of them responded, a response rate of 95%. In order to gather in-depth information for the research from numerous directions, purposive sampling methods were also used. These included sampling government officials at various levels, inhabitants, private-sector representatives, and the leaders of CSOs. A total of 45 respondents were selected for in-depth interviews, chosen from different actors and sectors.
Data Collection Methods
Both primary and secondary data sources were employed in this study to gather quantitative and qualitative information. A questionnaire with five-point Likert-scale questions was used to gather the quantitative data. A total of 232 specialists from the AAEPGDC offices at the city, sub-city, and Woreda levels filled in the questionnaire. The questions were designed to generate data regarding the factors that determine climate-change governance. A questionnaire with 30 items was created and distributed in paper form, covering seven variables: policies, strategies and regulations; finance; human resources; technologies; political willingness and leadership; information; and coordination. Some of the questions included in the questionnaire are provided next. How do you rate the political willingness of leaders regarding climate-change governance in the city? How do you rate the coordination of actors in the climate-change response? To what extent do you believe the current workforce can implement the intended climate-change governance? How do you rate the adequacy of finance for the implementation of plans on climate-change issues? How do you rate the enforcement of strategies, laws and regulations in climate-change actions? How do you rate access to the technology necessary for climate planning and implementation? The questionnaire data were collected from 1 March to 30 March 2021.
In addition, interviews with professionals from different sectors and actors were conducted to substantiate the data collected via the questionnaire. The interview questions were prepared according to the sectors and activities of the actors to be interviewed. Some of the questions included in the interviews are provided next. How would you describe the trend of climate change in Addis Ababa city? What were the major causes and adverse effects of the changing climate in the city? What measures has your organization taken to tackle climate change? What legislation, laws and standards exist to address climate change (specifically GHGs)? What procedures are used to implement these mechanisms? What factors impede effective climate-change governance in the city? What do you suggest for effective climate-change governance in the city? The interviews were conducted from March 2021 to October 2022. Finally, 45 professionals were asked to rank the ten major factors in order of importance. Observations were conducted in the city, targeting the waste-to-energy project, green areas, smart car parking, and flood-vulnerable and flood-affected sites. Secondary data, including books, journal articles, strategies, regulations, plans, reports, and other material related to the topic, were also reviewed and synthesized to produce this research.
Data Analysis Techniques
For this study, a binary logit model was used to analyse the quantitative data. We used a binary logistic regression model because the dependent variable is dichotomous: in this case, ineffective or effective. Where the dependent variable is a dummy, a binary logit model is suggested [102,103]. Hence, the binary logistic model was used to determine the relationship between climate-change-governance effectiveness and the underlying factors or independent variables, namely lack of policies, strategies, and regulations; lack of finance; lack of human resources; lack of technologies; lack of political will and leadership; lack of information; and lack of coordination. The dependent variable was coded with a value of 0 for ineffective and 1 for effective, whereas the independent variables were coded as 1 for low, 2 for moderate, and 3 for high. SPSS software version 26.0 was used to fit the binary logit model, with the high level used as the reference category for the independent variables.
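For reference, and stated here as the standard form of such a model rather than a formula quoted from the study, the binary logit specification implied by this coding can be written as

$$\ln\!\left(\frac{P(Y=1)}{1-P(Y=1)}\right)=\beta_0+\sum_{i=1}^{7}\left(\beta_{i,\mathrm{low}}\,x_{i,\mathrm{low}}+\beta_{i,\mathrm{mod}}\,x_{i,\mathrm{mod}}\right),\qquad \mathrm{OR}=\operatorname{Exp}(B)=e^{\beta},$$

where Y = 1 denotes effective governance, x_{i,low} and x_{i,mod} are indicator variables for factor i being rated low or moderate (the high rating serving as the reference category), and the Exp(B) values reported by SPSS are the corresponding odds ratios relative to that reference.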
Before applying the binary logistic regression, the logit model was evaluated for possible inadequacies. To assess the model's overall fit, a Hosmer and Lemeshow test was performed. The chi-square value for this test was 2.791 (sig. = 0.947), showing that the model fits the data adequately (Table 2). How well the model classifies the observed data is another way to establish its effectiveness: Table 3 shows that, overall, 87.2% of cases of climate-change-governance effectiveness were predicted correctly. The independent/covariate variables suggest that climate-change governance is ineffective (95%). The model summary, shown in Table 4, also highlights the goodness of fit of the model. The result reveals that 62.8% of the variance in climate-change-governance effectiveness can be explained by a linear combination of the seven independent variables (coordination; political will and leadership; policy, strategies, and regulations; finance; human resources; information; and technologies). Based on the results shown in Tables 2-4, we conclude that the model, with the given independent variables, is acceptable. In addition to the binary logit model, descriptive statistics were used in the data analysis. A total of 30 questions were prepared and distributed, and the responses were later computed and recoded into seven variables: lack of policies, strategies and regulations; finance; human resources; technologies; political will and leadership; information; and coordination. The average responses from all respondents to all the questions reflecting each variable were used to discuss the findings, and each variable is represented and addressed by distinct questions. The qualitative data were repeatedly read and coded, and similarities between the data were identified using NVivo (10.1). The findings from the qualitative data were analysed using a thematic approach.
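As an illustrative sketch only (the study itself was analysed in SPSS 26.0, not Python), a model of this form and the reported diagnostics could be reproduced roughly as follows; the file name and column names below are hypothetical stand-ins for the survey data.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy import stats

df = pd.read_csv("governance_survey.csv")   # hypothetical file

# Dependent variable: 1 = effective, 0 = ineffective governance (hypothetical column)
y = df["effective"]

# Predictors coded 1 = low, 2 = moderate, 3 = high problem level;
# dummy-code them with the high level (3) as the reference category.
factors = ["coordination", "political_will", "finance", "policy",
           "human_resource", "information", "technology"]
X = pd.get_dummies(df[factors].astype("category"))
X = X.drop(columns=[c for c in X.columns if c.endswith("_3")])
X = sm.add_constant(X.astype(float))

model = sm.Logit(y, X).fit()
print(model.summary())            # coefficients B with p-values
print(np.exp(model.params))       # Exp(B): odds ratios vs. the "high" reference

# Overall classification accuracy at the conventional 0.5 cut-off
p_hat = model.predict(X)
print("correctly classified:", ((p_hat >= 0.5).astype(int) == y).mean())

# Hosmer-Lemeshow goodness-of-fit test with 10 risk groups
groups = pd.qcut(p_hat, 10, duplicates="drop")
obs = y.groupby(groups).sum()
n = y.groupby(groups).count()
exp = p_hat.groupby(groups).sum()
hl = (((obs - exp) ** 2) / (exp * (1 - exp / n))).sum()
print("HL chi-square:", hl, "p =", stats.chi2.sf(hl, len(obs) - 2))
```

Note that statsmodels reports McFadden's pseudo-R² (model.prsquared), which is not the same statistic as the pseudo-R² values (e.g., Cox and Snell or Nagelkerke) that SPSS typically prints in its model summary table.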
Results of the Descriptive Statistics
The study results were analysed based on seven independent variables: lack of policies, strategies, and regulations; lack of finance; lack of human resources; lack of technologies; lack of political will and leadership; lack of information; and lack of coordination. The results are summarized in Table 5. Regarding lack of coordination, the majority of respondents (64.4%) consider that it highly affects climate-change governance, while 22.8% and 12.8% of respondents consider its effect to be moderate and low, respectively. Lack of political willingness and leadership quality is another major factor that hinders climate-change governance in the city; this factor was rated as high by 60.7% of respondents. Table 5 also shows that lack of finance and lack of policies, strategies, and regulations significantly affect climate-change governance: both were rated as high by 53% of respondents. Table 5 further demonstrates that many respondents (71.2%) considered that a lack of human resources has a moderate impact on climate-change governance, with the remaining experts divided between low (12.3%) and high (16.4%). As shown in Table 5, a large share of respondents characterized lack of information and lack of technologies as moderately affecting climate-change governance, with percentages of 49.3% and 44.3%, respectively.
Binary Logistic Regression Results
Seven independent variables were entered into a binary logistic regression model in order to pinpoint the major factors that hinder the effectiveness of climate-change governance. These variables were coordination; political will and leadership; finance; policy, strategies and regulations; human resources; information; and technologies.
As shown in Table 6, the results of the logistic regression reveal that coordination; political will and leadership; finance; and policy, strategies and regulations significantly affect climate-change governance at the 5% level of significance. Exp(B) gives the odds ratio for each variable. As shown in the table, the log odds of climate-change-governance effectiveness are positively related to coordination; political will and leadership; finance; and policy, strategies and regulations. Hence, climate-change governance is 66.861, 5.372, 5.673 and 3.379 times, respectively, more likely to be effective when problems with coordination; political will and leadership; finance; and policy, strategies and regulations are low rather than high. Similarly, Table 6 shows that climate-change governance with moderate problems of coordination; political will and leadership; finance; and policy, strategies and regulations is 3.398, 4.228, 1.967 and 4.593 times more likely to be effective than governance with high levels of these problems, respectively. Hence, these four variables (lack of coordination; political will and leadership; finance; and policy, strategies and regulations) have a highly significant effect on climate-change-governance effectiveness.
Interview Results
The quantitative results discussed above were also supported by our qualitative analysis. Using several interview questions, we collected qualitative responses from officials and experts at the federal, city, sub-city and Woreda levels, as well as from private sectors and NGOs. The results are summarised in Figure 6. As shown in the figure, the majority of interviewees repeatedly answered that the major constraints on climate-change governance in the city were weak enforcement of laws and regulations, lack of political willingness among officials, and weak horizontal interaction of stakeholders. Lack of finance, accountability, and leadership are also among the factors that hinder climate-change governance.
In Figure 6, we provide a summary of the top ten factors that hinder the climate-change response in the city, gathered from the aforementioned interview responses. Note that the figure only shows the number-one responses, i.e., the most important ones according to the respondents.
To provide additional context to the results presented above, we provide a few of the interview responses as follows. An interview with the AAEPGDC commissioner provides insights into the major factors that hinder the climate-change response in the city. The commissioner stated that: "The major factors to take mitigation and adaptation action in the city is lack of coordination of sectors and finance" [Interview, 10 August 2021]. He elaborated that, "Even though the 2019 redesigning city administration proclamation gives the mandate to manage and control environmental issues to AAEPGDC, the proclamation lacks clarity about the regulatory issue. The commission mainstreamed the climate change issue in more than 23 sectors and gave training for those sectors; however, the commissioner has no authority to control, evaluate and make accountable the work of these sectors. Hence, the work of the sectors that is related to the climate change issue is a voluntary type of activity. Although this is a very serious problem for climate change governance in the city, the commission's major work has been planting trees. In addition, although the commission tried to establish a steering committee led by the mayor, even after more than six months, it is still not functional because of several reasons, including the country's security issue".
In an interview with one of the experts of the commission, the expert said that one of the major problems of the AAEPGDC, at different levels, is the constant change of leaders and the weak leadership style [Interview, 21 July 2021]. Even during this study, three leaders were replaced at the commission level. He added that, due to a lack of legal systems, those leaders imposed their own interests. According to the energy group leader in the commission, leaders focus on short-term goals that are politically motivated and ignore the long-term impact of climate change [Interview, 3 July 2021]. He added that, at the commission level, for example, leaders pay more attention to the natural-resource-management directorate than to climate change because that directorate mobilizes money for the commission by selling quarries worth millions. On the contrary, climate-change issues need a budget, whereas leaders assume the climate-change agenda is insignificant. Hence, there is a misunderstanding among officials regarding climate change and its impact on economic development.
According to an interview held with a commission climate-change-mainstreaming expert regarding employees, in 2020 the city administration recruited a large number of degree holders to decrease unemployment, creating job opportunities and assigning them to different sectors using a quota system [Interview, 25 September 2021]. During that time, a large number of the employees who were recruited, especially at the Woreda level, held degrees unrelated to climate change or the environment. The majority of Woreda employees have educational backgrounds in areas such as geology, mathematics, physics, accounting, management, chemistry, engineering and other similar fields. Hence, a large number of experts lack an educational background related to climate change. In addition, there is a lack of training and capacity building at the sub-city and Woreda levels. During this study's data-collection period, most sub-city and Woreda experts did not have knowledge of the Addis Ababa CRGE strategy. One interviewee from Bole sub-city (a Woreda 6 expert) told us that this employee had had the chance to take training on the Addis Ababa CRGE strategy, whereas almost all other experts had not received the training [Interview, 28 June 2021].
Interview results from the federal urban and infrastructure ministry's environmental and climate-change management leader [Interview, 21 July 2021] and the federal EPGDC climate-change directorate director [Interview, 9 July 2021] revealed similar responses regarding vertical coordination in the city. Both argued that "compared to the regional government, there was weak vertical coordination in Addis Ababa city". They said that the reason is the autonomy granted to the city and the assumption that the city has its own potential, with a consequent lack of interest in obtaining support from the federal level. The federal EPGDC climate-change directorate director added that "when we call all regional and two city administration experts to give training, the Addis Ababa city commission experts did not participate." Another example is the interview we had with the C40 adviser. The view of the advisor is: "As a city advisor in AAEPGDC, during this work, the major problem of climate change governance in the city is lack of political commitment of officials. The level of understanding about climate change is still low." He argued that "the attention of climate change response given by the city administration is very low". He added that "shortage of budget and lack of accountability system is also the major problem of climate change governance" [Interview, 12 June 2021]. The C40 adviser also reiterated that implementation of the climate-change-resilient strategy has been problematic because of poorly structured institutions and the weak cooperation of actors. Additional reasons include a weak understanding of the climate-change impacts of development on the part of policy makers and investors and a lack of action in urban areas compared to rural areas. The interview with the C40 Cities Climate Leadership Group adviser also indicated that new climate policies, rules, and regulations are needed to address GHG emissions in different sectors.
According to an interview with the city's climate-change-mainstreaming team leader about climate-change response actions [Interview, 6 July 2021], "The major problem of mitigation and adaptation action is lack of political willingness of higher officials, weak leadership and lack of accountabilities". She added that, even though the commission established a steering committee led by the mayor, more than six months later the committee was still not functional and repeated enquiries had not received any response.
As examples from the private sector, we interviewed representatives of the Hujain shoes factory [Interview, 3 June 2021] and a soufflé malt factory [Interview, 23 June 2021]. They indicated that the city administration does not encourage the involvement of the private sector in preparing and implementing the climate-change strategy and plan. They also added that the AAEPGDC lacks cooperation with the private sector, especially on issues such as industrial emission reduction and environmental-management-plan preparation, and is instead quite active in punishing them.
Discussion
Climate-change issues require support and participation not only from environmental offices or departments but also from all city administration sectors and actors. This study shows that involving only the government's environmental offices in climate-change governance is not sufficient to address GHG emissions and minimize vulnerability to climate change in the city. Thus, the results of this research identify the aspects of effective climate-change governance that are not well implemented in the city. It is clear from the study that as coordination; finance; policy, strategies, and regulations; and political willingness and leadership improve, the effectiveness of climate-change governance will also increase. The quantitative and qualitative results above show that the effectiveness of climate-change governance is being hindered by several factors.
The major factor that hinders the effectiveness of governance in the city is a lack of coordination among actors and sectors. Even though the commission has mainstreamed climate-change issues across different sectors, such as transport, waste, the plan commission, disaster risk management, building, health, and green development, the horizontal collaboration of these sectors in the governance process is weak. Studies have found that GHG emissions and risks in cities are not only municipal- or local-government concerns; coordinating a range of actors across sectors is also challenging, yet it is necessary for effective climate-change governance that mitigates emissions and adapts to climate risks [29,37,39]. Regarding the lack of coordination, the commission itself is not able, and not even willing, to attract the participation of NGOs and the private sector in the decision-making process. Only the C40 advisor participated actively in climate-change action in the city, and the commission is not actively working to attract NGOs in the future. However, the federal government has one directorate, named the resource mobilization directorate, that works to attract CSOs. Our finding in this regard is supported by studies conducted in two African cities: research in Karonga, Malawi, and Dar es Salaam, Tanzania, also found that climate-change governance is hindered by poor collaboration between the government, the private sector and CSOs [104].
Lack of political will is another major problem that hinders the response to climate change in the city. The first requirement for an effective climate-change response is the political willingness of leaders in different sectors and at different levels [38,42]. According to the AAEPGDC climate-change-mainstreaming team leader, the climate-change issue has lacked attention from higher officials, especially from the city administration; the evidence for this is that the steering committee has not been functional for more than six months. The commissioner also argued that the steering committee incorporates different sectors and is the most important committee for climate-change response actions, but it has not yet started its work, as the city administration gives more attention to the country's security situation and COVID-19. Security concerns and the pandemic have adversely impeded climate-change governance in Ethiopia [49].
Lack of strong leadership at the local level is another problem related to climate-change action. Leadership is important because it motivates people to accomplish positive changes in an organization and plays an important role in determining who participates in the decision-making process and what actions they take. To achieve sustainable, long-term development, shaping climate-change-response leadership at different levels is critical [29,43].
Weak implementation of policies, strategies and regulations is another factor that hinders the governance process. The 2014 Climate Resilient Green Growth Strategy incorporates many mitigation and adaptation measures across different sectors, but the major constraint is weak enforcement of strategies, laws, regulations, and plans. A similar study conducted in African cities, specifically in Lusaka and Durban, shows that the governance process faced serious capacity problems in executing strategies and plans, especially at the local level [48]. Even though the mandate for climate-change issues was given to the EPGDC and mainstreaming is carried out across different sectors, the commission has no authority to control the sectors. Hierarchically, these sectors are accountable to the city administration and the municipality; hence, the environmental commission lacks authority over them.
In addition to the weak implementation of policies, strategies and regulations, inadequate laws and legislation are also factors that determine the climate-change response [24,28,32]. Climate-change issues are poorly understood by city officials, who in most cases assume that climate change is not a critical issue for the country. Hence, there are limitations in the strategies, regulations, proclamations, and laws needed to address GHG emissions across different sectors, including transport, waste, building, energy and others. Lack of an independent institution that would be directly responsible for climate-change issues is another factor that determines the local governance of climate change [17,38].
Inadequate financing for the implementation of plans or programs is another factor that hinders the climate-change response. The permanent source of funds is the budget from the upper level of government; the funds assigned to the commission are not earmarked for climate-change purposes and are insignificant in the first place. Furthermore, there is a lack of resource mobilization to obtain financing from different CSOs. In the cities of developing countries, a shortage of financing is a major factor that hinders climate-change governance because budgets are needed for housing, infrastructure provision, job creation and poverty reduction [45,104]. A shortage of knowledgeable experts is another problem for the climate-change response in the city. Although the number of employees is not, in itself, a problem, experts lack general climate-change knowledge. Studies have shown that experts knowledgeable about climate change are a key source of success for climate-change governance [29,33,37].
Additionally, according to the quantitative and qualitative results shown above, lack of access to information is not a major factor compared with the other variables. The study shows that information is not a major problem because the city administration compiles a GHG-emission inventory every two years. Regarding impacts and vulnerability, several reports indicate that higher officials, as well as the community, are well aware of vulnerable places. Climate-change governance in cities requires access to current, context-specific sources of data, including future climate predictions, GHG-inventory results, climate-vulnerability assessments, and impacts such as heat waves and floods [27,34].
Finally, this study shows that, currently, lack of access to technologies is not a major problem for climate-change governance in the city. However, this does not mean that it is not a problem at all: our study shows that, in the city, there is inadequate knowledge about technologies. When it comes to climate planning, studies show that it is necessary to support climate-change response actions with different technologies and technical solutions [3,32].
Conclusions
In Addis Ababa city, the practice of climate-change governance is ineffective and is significantly hindered by several factors. The results of this study reveal that lack of coordination and lack of political will and leadership are the key governance problems in the city, followed by inadequate finance and inadequate policies, strategies, and regulations. In addition, a shortage of knowledgeable experts and a lack of access to information and technologies contribute to the ineffectiveness of climate-change governance. The study also concludes that as the levels of coordination, political will and leadership increase, climate-change-governance effectiveness improves. Thus, the city administration should emphasise climate change as it does other crosscutting issues and should enable the steering committee by implementing a strong accountability system. In addition, the city administration should revise or formulate new policies, strategies or regulations, and establish an independent institution for climate-change issues. Specifically, the commission should create an enabling environment to attract non-state actors and should assign one directorate to mobilise finance, following the approach undertaken by the federal environmental protection commission. The commission should also provide continuous training and capacity building to sub-city and Woreda-level leaders and experts.
Figure 1 .
Figure 1. Map of Addis Ababa city with 11 sub-cities.
Figure 4 .
Figure 4. Flood impact in Addis Ababa city: (a,b) show damage to human life and their property; (c) damage to vehicles, human life and transport infrastructure; (d) flooded streets in the middle of the city causing disruption to the transportation system and flooding effects on roads.
This development strategy includes climate-change responses of both green development, which aims at preventing climate change (mitigation), and resilient development, which aims at responding to the impacts of climate change (adaptation).
Figure 5 .
Figure 5. (a) Climate-resilient green growth strategy concept in the city and (b) relationship between climate change, impacts and responses. Source: AAEPGDC [99].
Figure 6 .
Figure 6. Factors that hinder climate-change governance in the city, according to professionals.
Table 1 .
Number of respondents (experts) from different levels of AAEPGDC.
Table 5 .
Descriptive statistics results of factors affecting climate-change governance.
Table 6 .
Results of analysis on factors that hinder climate-change governance.
|
2023-02-12T16:03:39.497Z
|
2023-02-10T00:00:00.000
|
{
"year": 2023,
"sha1": "8b03ede7e251bc82886939e5d75e4e8b5f4eb081",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2071-1050/15/4/3235/pdf?version=1676015480",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "6944824f765d402e1d901f4a451998226a6e1482",
"s2fieldsofstudy": [
"Environmental Science",
"Political Science"
],
"extfieldsofstudy": []
}
|
256056412
|
pes2o/s2orc
|
v3-fos-license
|
Cross-Sectional Analysis of Human Papillomavirus Infection and Cytological Abnormalities in Brazilian Women
The aim of this study was to determine the incidence of infections and cytological abnormalities and to investigate possible predisposing factors such as sociodemographic characteristics, sexual behavioral habits, and gynecological and obstetric backgrounds. Between 2013 and December 2016, a cross-sectional study was conducted among 429 consenting women, from whom cervical samples were tested for the presence of Human papillomavirus (HPV) by polymerase chain reaction (PCR). Susceptibility to HPV infection was assessed by binary logistic regression in light of possible predisposing factors, which were collected using a questionnaire. In our sample population, the prevalence of HPV infection was 49%; high-risk types had a higher prevalence of 89.1%. A larger proportion of HPV-infected women were under 25 years of age, were single, and had monthly incomes up to minimum wage. Multivariate binary logistic regression analysis showed that age younger than 25 years increased the odds of infection fivefold, while a monthly income of one to three minimum wages provided protection against HPV infection, even if the women were married or had a cohabiting partner. In the HPV-positive group, squamous intraepithelial lesions (SIL) occurred more frequently in women who earned up to one minimum wage monthly, but a monthly income of one to three minimum wages protected against the development of SIL. The results suggest that age, marital status, and monthly income are important cofactors for HPV infection and the development of SIL.
Introduction
Cervical cancer (CC) is the third most common cancer in women worldwide, with an estimated 570,000 new cases in 2018, of which more than 85% occurred in less developed regions [1]. In Brazil, cervical cancer is also the third most common cancer in women, with more than 16,500 new cases annually [2]. Human papillomavirus (HPV) infection plays a central role in the development of cervical cancer and can reach a prevalence of 99.7% in cervical cancer samples [3]. It has been clearly demonstrated that high-risk HPV infection is necessary to promote progressive cell transformation leading to squamous intraepithelial lesions (SILs) and cervical cancer [4]. Although many women develop HPV infections of the cervix, studies support the interpretation that most HPV infections are only transiently detectable and do not lead to dysplasia or CC [5]. The natural history of HPV infection is not clear, but several cofactors are thought to promote the development and progression of SIL and CC. Viral factors such as HPV genotype, viral load, and coinfection with many HPV types are associated with abnormal cytologies in women [6]. In addition, sexual behavior and environmental factors, including hormonal contraceptives, tobacco smoking, parity, and coinfection with other sexually transmitted pathogens, especially Chlamydia trachomatis (CT), are associated with disease progression [2,7].
The HPV prevalence in cervical specimens in Brazil determined in a systematic review and meta-analysis study was 25.41% with a prediction interval of 7.17% to 60.04% depending on the population and geographical area studied [8]. Epidemiological data on the prevalence of HPV in the state of Paraná are still scarce. A large study was conducted in the city of Paiçandu, in northwestern Paraná, and the overall prevalence of HPV deoxyribonucleic acid (DNA) found in this area was lower than the levels found in studies conducted in other Brazilian regions that also used polymerase chain reaction (PCR) [9]. A recent study conducted in Maringá, another city in northwestern Paraná, found an HPV prevalence of 33.8% [10]. However, it is necessary to extend this study to other cities in the region, since there is a great need for medical care in the North Paraná region.
The aim of this cross-sectional study was to determine the incidence of HPV infection and cytological abnormalities and to investigate possible predisposing factors such as sociodemographic characteristics, sexual behavioral habits, and gynecological and obstetric history.
Study Design and Sample Collection
The women participating in the study were from the city of Londrina and surrounding small towns. This large city is located in the northern region of the Brazilian state of Paraná (southern region of the country) and is more than 300 km from the respective capital (Curitiba).
Cervical cell samples were randomly collected between 2013 and December 2016 from 429 women who presented for outpatient appointments at the colposcopy outpatient clinic of the Intermunicipal Health Consortium of Middle Paranapanema and the College Hospital and Clinical Center of the College of Londrina, and who had undergone cervical cancer screening cytology at two primary healthcare facilities (Municipal Health Centers Dr. Justiniano Clímaco da Silva and Dr. Paulo Roberto Moita da Silva). The women included in the present study were participating in cervical cancer prevention programs, since, according to the Brazilian guidelines for cervical cancer screening, routine cytological examination is recommended for women aged 25 to 64 years who are sexually active [11]. After sample collection for cytology, cytobrushes were stored in 2 mL of TE buffer (10 mM Tris-HCl, 1 mM ethylenediaminetetraacetic acid (EDTA), pH 8.0) at −20 °C until analysis. Peripheral blood was collected in sterile syringes containing EDTA and stored at −20 °C until analysis. Patients were interviewed using a structured questionnaire to collect sociodemographic data, sexual behavior data, and gynecologic and obstetric data. The women who participated in this study had not been vaccinated at the time the samples were collected; the HPV vaccine has only been distributed by the Unified Brazilian Health System since March 2014, and distribution of the vaccine was limited to girls aged 11-14 years [12].
Cervical Cytology
Cervical cancer screening smears obtained in the healthcare units were sent to the public health laboratory, where they were evaluated and reported according to the diagnostic criteria of the Bethesda system (2014). SIL was defined as low-grade squamous intraepithelial lesion (LSIL), high-grade squamous intraepithelial lesion (HSIL), atypical squamous cells of undetermined significance (ASC-US), atypical squamous cells in which HSIL could not be excluded (ASC-H), or cervical carcinoma (CC), while controls were negative for intraepithelial lesions or malignancy, with all SIL types excluded [13].
DNA Extraction
Genomic DNA from cervical cells was extracted from Cytobrush samples using DNAzol (Invitrogen Inc., Carlsbad, CA, USA) according to the manufacturer's instructions. DNA concentrations were measured at 260 nm using the NanoDrop 2000c™ spectrophotometer (Thermo Fisher Scientific, Waltham, MA, USA), and purity was assessed by the 260 nm/280 nm ratio. DNA samples were then stored at −20 °C.
HPV Detection by Polymerase Chain Reaction (PCR)
HPV was detected by PCR using primers MY09 (5′-CGTCCMAARGGAWACTGATC-3′) and MY11 (5′-GCMCAGGGWCATAAYAATGG-3′), which amplify a conserved region of approximately 450 base pairs (bp) of the HPV L1 gene [14]. Reaction conditions were 190 nM dNTPs, 500 nM of each primer, 2 mM MgCl2, 1× buffer, approximately 80 ng of DNA, and 1.25 U of Taq polymerase (Invitrogen™, Carlsbad, CA, USA), with an annealing temperature of 55 °C. This method was chosen because it targets very small fragments and is, therefore, more sensitive than several other molecular techniques. Co-amplification of the human β-globin gene (268 bp) was performed as an internal control using primers GH20 (5′-GAAGAGCCAAGGACAGGTAC-3′) and PC04 (5′-CAACTTCATCCACGTTCACC-3′) under the same conditions as the HPV PCR. A negative control (no DNA) was also included in all reactions to ensure that no contamination was present, as well as a positive control consisting of a cervical adenocarcinoma cell line containing an integrated HPV18 genome (HeLa). The PCR products were analyzed by electrophoresis in a 10% polyacrylamide gel stained with silver nitrate.
PCR Detection of Chlamydia Trachomatis (CT)
The PCR assay for detection of CT was performed using primers specific for the gyrA gene: forward primer C2 (5′-TGATGCTAGGGACGGATTAAAACC-3′) and reverse primer C5 (5′-TTCCCCTAAATTATGCGGTGGAA-3′), as described previously [15]. Co-amplification of the human β-globin gene was also performed as an internal control. For each set of tests, a DNA pool extracted from cervical cells infected with CT was used as a positive control, and "no DNA" was used as a negative amplification control. The 463 bp amplicons were analyzed by electrophoresis in a 10% polyacrylamide gel stained with silver nitrate.
Amplicons Sequencing
To confirm primer specificity, some amplicons of both HPV and CT DNA were purified using the PureLink™ PCR Purification Kit (Invitrogen) according to the manufacturer's instructions. The sequencing reaction was performed using the BigDye® Terminator v3.1 Cycle Sequencing Kit (Applied Biosystems®, Foster City, CA, USA), 50 ng of DNA template, and 5 µM primer (forward or reverse) in a final volume of 10 µL. Cycling conditions were as follows: 10 s at 95 °C, then 30 cycles of 20 s at 95 °C, 20 s at 50 °C, and 60 s at 60 °C. Amplicons were sequenced in a 24-capillary 3500xL Genetic Analyzer (Applied Biosystems, Life Technologies, Thermo Fisher Scientific, Foster City, CA, USA). Percentage identity was determined with the program BLAST by comparing the DNA sequences of the amplicons to known HPV L1 or gyrA nucleotide sequences in the GenBank databases.
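Purely as an illustration (the authors used NCBI BLAST for this step), percentage identity can be computed from a pairwise alignment with a small helper like the one below; the function is a hypothetical stand-in and assumes the two sequences have already been aligned to equal length, with '-' marking gaps.

```python
def percent_identity(aligned_query: str, aligned_ref: str) -> float:
    """Percent identity over aligned, non-gap columns of a pairwise alignment."""
    assert len(aligned_query) == len(aligned_ref), "sequences must be aligned"
    pairs = [(q, r) for q, r in zip(aligned_query, aligned_ref)
             if q != "-" and r != "-"]
    matches = sum(1 for q, r in pairs if q.upper() == r.upper())
    return 100.0 * matches / len(pairs)

# Example with made-up 8-base sequences: one mismatch -> 87.5% identity
print(percent_identity("ACGTTTAC", "ACGTATAC"))
```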
Statistical Analysis
Associations between categorical variables were analyzed with the chi-square (χ²) test or Fisher's exact probability test, where appropriate, and expressed as absolute numbers (n) and percentages (%). Differences between categories of the same variables were assessed using the Mann-Whitney test and expressed as the median and interquartile range (IQR, 25-75%). The odds ratio (OR) and 95% confidence interval (95% CI) were determined. Binary logistic regression analysis was used to determine the significant predictors of HPV infection compared with controls (uninfected women), and multinomial logistic regression was used to analyze the risk factors for the presence of SIL compared with controls (normal cervical cytology) in HPV-infected women. In both analyses, the sociodemographic, sexual behavioral, gynecologic, and obstetric factors associated with HPV infection or a SIL diagnosis in the bivariate analysis were included as explanatory variables. All tests were two-sided, and a significance level of α = 0.05 was assumed. Analyses were performed using IBM SPSS Statistics 22.0 software (SPSS Inc., Chicago, IL, USA).
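As a minimal, self-contained sketch of the bivariate statistics described here (the counts below are placeholders, not the study data), the chi-square test, Fisher's exact test and an odds ratio with a Woolf 95% confidence interval can be obtained from a 2 × 2 table as follows.

```python
import numpy as np
from scipy import stats

# Hypothetical 2x2 table: rows = exposure (e.g., age < 25: yes/no),
# columns = HPV status (positive/negative); values are placeholders.
table = np.array([[30, 10],
                  [180, 209]])

chi2, p, dof, expected = stats.chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, p = {p:.4f}")

# Fisher's exact test, preferred when expected cell counts are small
fisher_or, fisher_p = stats.fisher_exact(table)
print(f"Fisher's exact p = {fisher_p:.4f}")

# Odds ratio with Woolf (log-based) 95% confidence interval
a, b, c, d = table.ravel().astype(float)
odds_ratio = (a * d) / (b * c)
se_log_or = np.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
ci_low, ci_high = np.exp(np.log(odds_ratio) + np.array([-1.96, 1.96]) * se_log_or)
print(f"OR = {odds_ratio:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f})")
```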
Results
Some samples were sequenced to confirm primer specificity. The identity percentage was determined using the program BLAST, and the sequences obtained for HPV and CT demonstrated 100% identity with the L1 sequence, AJ617545.1, and the GyrA CT subunit, JN795357.1, respectively.
The sociodemographic characteristics of the HPV-negative and -positive women are shown in Table 1. There were no significant differences between these groups in self-reported ethnicity (p = 0.18), schooling (p = 0.83), knowledge of HPV (p = 0.50), or knowledge of viral transmission (p = 0.63). However, HPV-infected women had a lower mean age, 33 years, than uninfected women, 43 (32-52) years (p < 0.001). To determine in which age range HPV infection was more common, the continuous variable "age" was divided into age ranges, as shown in Table 1. Thus, compared with the control group, a greater proportion of HPV-infected women were younger than 25 years (p < 0.001), were single (p < 0.001), had a reported monthly income of up to one minimum wage (p = 0.018), and were smokers (p = 0.014). Table 2 shows the analysis of the HPV-negative and -positive groups according to sexual behavior and gynecologic and obstetric characteristics. Age at menarche (p = 0.21), number of sexual partners in the past 6 months (p = 0.051), oral contraceptive use (p = 0.13), condom use (p = 0.44), spontaneous abortion (p = 0.27), and CT infection (p = 0.056) did not differ statistically between the HPV-positive group and the control group. However, HPV infection was associated with first sexual intercourse before age 18 (p = 0.012), at least four lifetime sexual partners (p < 0.001), and never having been pregnant (p = 0.008). (Table footnotes: analysis by two-sided chi-square (χ²) test or Fisher's exact test where appropriate, with p < 0.05 as the significance level; data expressed as absolute numbers and percentages (%); * p < 0.05, ** p < 0.01, *** p < 0.001; a for the analysis of sexual behavioral and gynecological and obstetric characteristics, not all 429 patients were included, with numbers varying by characteristic.)
The occurrence of CT infection was more common in women who tested positive for HPV infection. Although this difference was not statistically significant, the p = 0.056 value may indicate a tendency toward a higher CT prevalence in HPV-infected women.
Considering the number of cases of the analyzed variables relative to the total population sample in this study, the prevalence of HPV infection was 49%, with high-risk types being far more prevalent (89.1%) than low-risk types (8.0%) and types of undetermined risk (2.9%). The prevalence of infection with C. trachomatis was 4.7%, and the proportion of HPV and C. trachomatis coinfection was 7.2% among women with cervical lesions and 2.2% among healthy individuals in the population sample studied. The prevalence of cervical lesions was 25.2%. When cervical lesions were stratified according to the degree of involvement, prevalences of 7.3% for LSIL, 17.7% for HSIL, and 1.2% for cancer were observed.
To test whether these significant variables (age, marital status, monthly income, smoking status, age at first sexual intercourse, number of lifetime sexual partners, and number of pregnancies) were independently associated with HPV infection, a binary logistic regression analysis was performed with the control group as the reference category. A direct association was found for age less than 25 years, which increased the odds of acquiring the virus approximately fivefold (OR = 4.92; 95% CI = 1.67-14.52; p = 0.004), whereas being married or having a cohabiting partner (OR = 0.45; 95% CI = 0.23-0.88; p = 0.020) and a monthly income of one to three minimum wages (OR = 0.59; 95% CI = 0.36-0.95; p = 0.030) provided protection against HPV infection (Table 3). After verifying which variables were directly and independently associated with HPV infection, the HPV-positive group was categorized into normal cytology, LSIL, and HSIL according to the cytological results in order to analyze the influence of sociodemographic and sexual behavioral, as well as gynecological and obstetric, characteristics on the development of SIL. Of all the independent variables studied, the presence of HSIL was associated with age <25 years (p = 0.002), oral contraceptive use (p = 0.010), and spontaneous abortion (p = 0.010) (data not shown). The presence of LSIL was associated with age <25 years (p = 0.002), a monthly income of up to one minimum wage (p < 0.001), and more than four lifetime sexual partners (p = 0.019). In the multinomial logistic regression analysis with normal cytology as the reference category, all age categories >25 years and spontaneous abortion (OR = 4.84; 95% CI = 1.72-13.60; p = 0.003) were independently associated with the risk of developing LSIL (Table 4). In the analysis of HSIL, only >4 lifetime sexual partners (OR = 3.41; 95% CI = 1.35-8.61; p = 0.009) was an independent risk factor (Table 4).
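For completeness, and again only as a hedged sketch with hypothetical column names rather than the authors' SPSS procedure, a multinomial logistic regression with normal cytology as the reference outcome can be fitted along the following lines.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

hpv = pd.read_csv("hpv_positive_women.csv")     # hypothetical file

# Outcome coded 0 = normal cytology (reference), 1 = LSIL, 2 = HSIL
y = hpv["cytology"]
X = sm.add_constant(hpv[["age_category", "abortion", "partners_gt_4"]].astype(float))

mnl = sm.MNLogit(y, X).fit()
print(mnl.summary())
print(np.exp(mnl.params))    # odds ratios for LSIL and HSIL vs. normal cytology
```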
Discussion
In this cross-sectional study, the incidence of HPV infection by PCR and predisposing factors for infection and cytological abnormalities were determined in women who received care in the public health system (SUS) in Londrina, Paraná State, Brazil. To our knowledge, this is the first epidemiological and molecular report to investigate HPV prevalence in this region.
This study found an HPV infection prevalence of 49.0% in the women studied, of which 89.1% were high-risk types, 8.0% were low-risk types, and 2.9% were of undetermined risk. However, we were unable to perform typing on all of our samples, which constitutes a limitation of our study. HPV infection was associated with four sociodemographic characteristics (age, marital status, smoking status, and monthly income), two sexual behavior variables (age at first sexual intercourse and number of lifetime sexual partners), and one gynecologic and obstetric aspect (parity).
Only those younger than 25 years, marital status with a married or civil partner, and monthly income of one to three minimum wages (Table 1) were independently associated with HPV infection in a multivariate model.
The association between young age and HPV infection is well established in the literature as an independent factor, showing that HPV prevalence is age-specific worldwide for both low-risk types (LR-HPV) and high-risk types (HR-HPV) [16]. Teenagers and young women are more sexually active and more susceptible to pathogen infections because they have a particular cervical anatomy that manifests cervical ectopy in addition to early maturation stages, making them vulnerable to both trauma and infection, especially in the developing transformation zone [17].
Single women were significantly overrepresented in the HPV-positive group (Table 1), which is consistent with the findings of Foliaki et al. [18]. In addition, a significantly higher proportion of married women was observed in the control group than in the HPV-infected group (Table 1), presumably because they have a steady partner [19].
It is also known that multiparity, which becomes more likely in women who start having sex earlier, is associated with a higher risk of exposure of women to HPV infection and other cofactors [7,20].
Tobacco smoking was also associated with HPV infection in this study (Table 1). Several compounds from cigarette smoke such as nicotine (and its major metabolite cotinine) and carcinogenic tobacco-specific N-nitrosamines were detected in cervical mucus, highlighting the synergistic effect between cigarette smoking and HPV infection [21]. Another tobacco-related carcinogen, benzo[a]pyrene (BaP), can interact with HPV, modulating the life cycle of the virus and promoting its synthesis [22]. Tobacco smoking may also reduce the density of Langerhans cells, affecting local immune surveillance in the cervix [23].
Women who reported a monthly income at or below the minimum wage were more likely to be infected with HPV (Table 1). These results make sense considering that a monthly income of less than a Brazilian minimum wage (i.e., low socioeconomic status) is strongly associated with HPV infection in studies in the Brazilian northeast and abroad [24]. In fact, sociodemographic data show the social inequalities associated with the high risk of HPV infection leading to cervical cancer, as the virus is more prevalent in public health facilities than in private clinics [25]. In this context, the lack of knowledge about HPV, as well as its prevention and transmission, is a factor that should be considered in women with less education [26].
We found an independent association between monthly income of one to three minimum wages and protection from SIL among women infected with HPV (Table 3). These results suggest that a higher monthly income of women denotes that they are less susceptible to developing cervical lesions. This information confirms the observation that (poor) economically disadvantaged women and girls in many parts of the world are more vulnerable to sexually transmitted diseases due to limited access to economic and educational resources and to prevention information and tools [27].
Regarding sexual behavior and gynecologic and obstetric aspects, an association was found between HPV infection and first sexual intercourse before the age of 18 years and at least four sexual partners during life ( Table 2). As mentioned earlier, a physiologically ectopic and immature genital tract may explain the predisposition to HPV infection in young women. In addition, a high number of sexual partners is an important risk factor for acquiring HPV [20]. We also observed an inverse association for one pregnancy with HPV infection (Table 2), but not for more than two pregnancies. High parity is consistently associated with susceptibility to HPV infection, and hormonal, traumatic, and immunologic mechanisms are thought to play a role in this association [28].
Our study showed a tendency for association between HPV infection and the intracellular bacterium Chlamydia trachomatis (CT) (Table 2). This tendency was also reported by Nonato et al. (2016) [29]. Limitations related to the small number of women with HPV/CT coinfection may have contributed to these results in both studies. In this context, the association between CT genital infections and HPV has been most thoroughly studied in the development of CC. CT infection facilitates infection and reinfection with multiple HPV types, allows viral persistence, and increases the risk of developing CC in coinfection cases [30]. In fact, evidence suggests that CT and HPV share the same transmission route and risk factors [30].
Having more than four sexual partners over a lifetime was associated with the development of HSIL, and spontaneous abortion was associated with the development of LSIL, which is consistent with meta-analyses of epidemiological studies demonstrating the involvement of these factors from HPV infection to tumor occurrence [31,32].
Cervical cancer screening is effectively performed in Brazil through the Pap smear, a method that has led to a decrease in CC incidence in the country over the last five decades. Nevertheless, new cases of CC are detected every year, and mortality from this cancer is still alarming worldwide [33]. This screening failure can be explained in part by the subjective and poorly reproducible nature of cervical cytology: the Pap test has limited sensitivity, requires regular repetition to achieve the desired efficacy, and may suffer from interobserver variability [9].
Although this was a cross-sectional study that did not allow us to draw conclusions about causal relationships, we would like to use our cohort and several reports published in recent years to draw attention to the fact that diagnosis by HPV DNA using polymerase chain reaction in combination with cytologic analysis allows more accurate detection of HPV and lesions, even when abnormalities are not detected in the Pap smear.
Conclusions
In this article, we described the occurrence of HPV in the northern region of the Brazilian state of Paraná and important cofactors for HPV infection and the development of SIL in our population-based study. In this sense, we hope to contribute to a better characterization of HPV epidemiology and to the implementation of public health policies in Brazil.
Institutional Review Board Statement: The present study was approved by the Ethics Committee Involving Humans of the local Institutional Review Board (IRB) (CAAE 05505912.0.0000.5231). The purpose and procedures of the study were explained to all participants, and written informed consent was obtained prior to sample collection and interview.
Informed Consent Statement: The purpose and procedures of the study were explained to all participants, and written informed consent was obtained prior to sample collection and interview.
Data Availability Statement: Not applicable.
Conflicts of Interest:
The authors declare no conflict of interest.
|
2023-01-22T06:16:09.376Z
|
2023-01-01T00:00:00.000
|
{
"year": 2023,
"sha1": "08a88031d98ade5093733cd85cccf65c3f30fd08",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2076-0817/12/1/148/pdf?version=1673855551",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "da1d953db7dced338e0faa94238fd0f59f1ff9be",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
582551
|
pes2o/s2orc
|
v3-fos-license
|
Biotechnology approach to determination of genetic and epigenetic control in cells
A series of studies aimed at developing methods and systems for analyzing epigenetic information in cells is presented. The role of the epigenetic information of cells, which is complementary to their genetic information, was inferred by comparing the predictions of genetic information with the cell behaviour observed under conditions chosen to reveal adaptation processes and community effects. Analysis of epigenetic information was developed starting from the twin complementary viewpoints of cell regulation as an 'algebraic' system (emphasis on the temporal aspect) and as a 'geometric' system (emphasis on the spatial aspect). The knowledge acquired from this study will lead to the use of cells for fully controlled practical applications like cell-based drug screening and the regeneration of organs.
General background
Knowledge about living organisms increased dramatically during the 20th century and has produced the modern disciplines of genomics and proteomics. Despite these advances, however, there remains the great challenge of learning how the different living components of the cell are integrated and regulated. As we move into the postgenomic period, the complementarity of genomics and proteomics will become apparent and the connections between them will be exploited. However, neither genomics nor proteomics alone can provide the knowledge needed to interconnect the molecular events in living cells. The cells in a group are individual entities, and differences arise even among cells with identical genetic information that have grown under the same conditions. These cells respond to perturbations differently. Why and how do these differences arise? Cells are the minimum units containing both genetic and epigenetic information which are used in response to environmental conditions such as interactions between neighbouring cells and of changes in extracellular conditions. To understand the rules underlying the possible differences occurring in cells, we need to develop methods for simultaneously evaluating both the genetic information and the epigenetic information (Fig. 1). In other words, if we are to understand adaptation processes, community effects, and the meaning of network patterns of cells, we need to analyze the epigenetic information in cells. Thus we have started a project focusing on developing a system that can be used to evaluate the epigenetic information of cells by observing specific cells and their interactions continuously under controlled conditions. The importance of the understanding of epigenetic information will become apparent in cell-based biological and medical fields like cell-based drug screening and the regeneration of organs from stem cells, fields in which phenomena cannot be interpreted without taking epigenetic factors into account.
In 1999 the author moved to the Univ. of Tokyo and began his research on the "determination of genetic and epigenetic cellular control processes". To understand the meaning of the genetic variability and the epigenetic correlation of cells, we have developed the on-chip single-cell-based microcultivation method. As shown in Fig. 2, the strategy consists of a three-step process. First, we purify cells from tissue individually in a nondestructive manner.
Aim of the project
The aim of our project is to develop methods and systems for analyzing the epigenetic information in cells. The project is based on the idea that, although genetic information makes a network of biochemical reactions, the history of the network as a parallel-processing recurrent network was ultimately determined by the environmental conditions of cells, which we call epigenetic information. As described above, if we are to understand the events in living systems at the cellular level, we need to keep in mind that epigenetic information is complementary to genetic information.
The advantage of this approach is that it bypasses the complexity of underlying physicochemical reactions which are not always completely understood and for which most of the necessary variables cannot be measured. Moreover, this approach shifts the view of cell regulatory processes from the basic chemical ground to the paradigm of a cell as an information-processing unit working as an intelligent machine capable of adaptation to changing environmental and internal conditions. It is an alternative representation of the cell and can bring new insight into cellular processes. Moreover, models derived from such a viewpoint can directly help in the more traditional biochemical and molecular biological analyses of cell control.
The basic part of the project is the development of on-chip single-cell-based cultivation and analysis systems for monitoring the dynamic processes in the cell. In addition, we have employed these systems to examine a number of other processes, e.g., the variability of cells having the same genetic information, the inheritance of non-genetic information between adjacent generations of cells, the cellular adaptation processes caused by environmental change, the community effect of cells and network pattern formation in cell groups (Figs. 3 and 4). After making extensive experimental observations, we can understand the meaning of epigenetic information in the modeling of more complex signaling cascades. This field has been largely monopolized by physico-chemical models, which provide a good standard for the comparison, evaluation, and development of our approach. The ultimate aim of our project is to provide a comprehensive understanding of living systems as the products of both genetic information and epigenetic information.
3-1. Single-cell cultivation chip system [2-10]
To understand the variability of cells having the same genetic information and to observe the adaptation processes of cells, we need to compare the sister cells or the direct descendant cells directly (Fig. 3). For that purpose, we have developed the system for an on-chip single-cell cultivation chip. The system enables excess cells to be transferred from the analysis chamber to the waste chamber through a narrow channel and allows a particular cell to be selected from the cells in the microfabricated cultivation chamber by using a kind of non-contact force, optical tweezers (Fig. 5). Figure 6 depicts our entire system for the on-chip single-cell microculture chip. The system consists of a microchamber array plate, a cover chamber, a phase-contrast/fluorescent microscope and optical tweezers. The cover chamber is a glass cube filled with a buffer medium and is attached to the array plate so that the medium in the microchambers can be exchanged through a semipermeable membrane.
Using the system, we examined whether the direct descendants of an isolated single cell could be observed under the same isolation conditions. Figure 7 shows the interdivision times of four lineages of isolated E. coli cells derived from a common ancestor. The four series of interdivision times varied around the overall mean value, 52 min (dashed line); the mean values of the four cell lines a, b, c, and d were 54, 51, 56 and 56 min, showing differences rather small compared with the large variations in the interdivision times of consecutive generations. This result supports the idea that interdivision time variations from generation to generation are dominated by fluctuations around the mean value, and it was evidence of a stabilized phenotype that was subsequently inherited. To explore this idea, we examined the dependence of interdivision time on the interdivision time of the previous generation. We grouped the interdivision time data into four categories and determined their distributions (Fig. 7(b)). Comparison of these distributions showed that they were astonishingly similar to one another, suggesting that there was no dependence on the previous generation. That is, there was no inheritance in interdivision time from one generation to the next.
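A minimal sketch, in Python, of the kind of generation-to-generation dependence check described here, assuming a list of (mother, daughter) interdivision-time pairs in minutes. The data values and the simple two-category grouping are illustrative assumptions, not the authors' actual analysis, which used four categories.

    import statistics

    # Hypothetical (mother, daughter) interdivision times in minutes.
    pairs = [(48, 55), (60, 50), (52, 58), (45, 49), (70, 51),
             (55, 62), (40, 47), (66, 57), (50, 53), (58, 44)]

    # Group daughter times by the mother's interdivision-time category,
    # mirroring the category comparison described for Fig. 7(b).
    mothers = sorted(m for m, _ in pairs)
    cut = mothers[len(mothers) // 2]  # two categories here for brevity
    short_m = [d for m, d in pairs if m <= cut]
    long_m = [d for m, d in pairs if m > cut]
    print("daughters after short mother cycles:", statistics.mean(short_m))
    print("daughters after long mother cycles:", statistics.mean(long_m))

    # Similar means (and distributions) across categories would indicate no
    # inheritance of interdivision time from one generation to the next.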
3-2. On-chip agarose microchamber system [11-14]
One approach to study network patterns (or cell-cell interactions) and the community effect of cells is to create a fully controlled network by using cells on the chip (Fig. 4). We have therefore developed a system consisting of an agar-microchamber (AMC) array chip, a cultivation dish with a nutrient-buffer-changing apparatus, a permeable cultivation container, and a phase-contrast/fluorescent optical microscope with a 1064-nm Nd:YAG focused laser irradiation apparatus for photothermal spot heating (Fig. 8). The most important advantage of this system is that we can change the microstructures in the agar layer even during cultivation, which is impossible when using conventional Si/glass-based microfabrication techniques and microprinting methods.
Figure 5 Single-cell cultivation in microchambers for measuring the variability of genetic information.
As explained above, the agarose-microchamber cell-cultivation system includes an apparatus for photothermal etching. Photothermal etching is an area-specific melting of the agarose microchambers by spot heating using a focused laser beam and a thin layer made of a lightabsorbing material such as chromium (since agarose itself has little absorbance at 1064-nm). We made the threedimensional structure of the agar microchambers by using a photo-thermal etching module. Figure 9 is a top-view micrograph of the agar microchambers connected by small channels. The space on the chip was colored by filling the microchambers with a fluorescent dye solution. Also shown are cross-sectional views of the A-A and B-B sections, in which we can easily see narrow tunnels under the thick agar layer in the A-A section and round tunnels in the B-B section. These cross-sectional micrographs show that we can make narrow tunnels in the agar layer by photothermal etching. The left micrograph in Fig. 9 is a top view of the whole microchamber array connected by narrow tunnels.
By using this photothermal etching method, we can change the neural network pattern on a multi-electrode array chip during cultivation (Figure 10). The agarose microchamber system can also be used to observe the dynamics of the synchronizing process of two isolated rat cardiac myocytes. Figure 11 shows an example of the synchronizing process of two cardiac myocytes. After the cultivation had begun, the two cells elongated and made physical contact within 24 hours, followed by synchronization. It should be noted that, as shown in the graph, the synchronization process involved one of the cells following the rhythm of the other, and that the 'copy cat' cell stops beating prior to acquiring the new beat rhythm.
Conclusions
We have newly developed and have just started to use a series of methods for understanding the meaning of genetic information and epigenetic information in a simple cell model system. The most important expected contribution of this project is to reconstruct the concept of a cell regulatory network from the 'local' (molecules expressed at certain times and places) to the 'global' (the cell as a viable, functioning system). Knowledge of epigenetic information, which we can control and change during the life of cells, is complementary to genetic information, and those two kinds of information are indispensable for living organisms. This new kind of knowledge has the potential to be the basis of a new field of science.
Authors' contributions
KY conceived of the study, its design and coordination.
Figure 8 On-chip agarose microchamber system.
Genetic variability of direct descendant cells of E. coli
Figure 9 Three-dimensional structure of agarose microstructures.
Figure 10 Stepwise formation of neuronal network of rat hippocampal cells.
Figure 11 Dynamics of the synchronizing process of two isolated rat cardiac myocytes.
|
2018-04-03T03:06:25.888Z
|
2004-11-22T00:00:00.000
|
{
"year": 2004,
"sha1": "f8e544e486a70085bd4f1e15cd0ea1093a97cdcc",
"oa_license": "CCBY",
"oa_url": "https://jnanobiotechnology.biomedcentral.com/track/pdf/10.1186/1477-3155-2-11",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "8812436725ecd88c8a07bc8f731b2c9ddd90da38",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
}
|
55613973
|
pes2o/s2orc
|
v3-fos-license
|
Bioethical conflicts: in physiotherapy home care for terminal patients
The bioethical debate gives rise to considerations that foster understanding of death and terminal illness, in order to ensure compliance with principles such as respect for autonomy, beneficence, not maleficence, and human rights. The objective of the study was to analyze bioethical conflicts related to physiotherapy home care for terminal patients. This is a qualitative descriptive study. Ten physiotherapists from the Federal District, Brazil, participated, answering a semi-structured interview. Two categories were identified: “challenges of home care for patients with terminal conditions”; and “polarization of physiotherapists between technicality and humanism”. The study reveals potential bioethical conflicts in the care of these patients and their families, in which the limits for the use of therapeutic resources translate into opposite approaches – either attachment or detachment – and the challenge of promoting care guided by humanization and human dignity.
What makes the question of terminality, in the context of palliative care, so prominent in bioethical discussions? Certainly because conflicting dimensions arise that involve both the preservation of life at all costs and the confrontation with death, as well as the promotion of human dignity. These are complex topics, loaded with values and moral judgments, which professionals may find difficult to deal with because of the relationship of care with a person who is beyond therapeutic possibilities.
There are advances in medicine and biotechnology, new mechanisms of life extension, along with the expectations of families, and the legal and cultural aspects of each social group involved.The bioethical debate has allowed important reflections to understand the phenomenon of death in order to ensure the observance of principles based on respect for autonomy, based on the practice of beneficence and non-maleficence, and on human rights, contributing to the humanization of health care 1 .
In this sense, according to Siqueira, Zoboli and Kipper 2, the end of life has become one of the convergent poles of the ethical challenges of the contemporary world. For Singer 3, the advancement of medical technology has forced us to think about issues we had not faced previously. These issues can perpetuate a life without cure, as in the case of patients - mainly elderly patients with oncological diseases, dementia or severe neurological sequelae - undergoing high-tech, often unnecessary, invasive procedures. In these cases, the suffering of the person kept in bed, controlled by artificial respiration, with pressure sores and severe pain, is disregarded, thereby damaging the quality of life of patients and their relatives. This form of treatment is based on the ethics of healing, to the detriment of the ethics of attention to the person, which is based on the centrality of care and human dignity 4.
Palliative care has as ethical principles the understanding of death as a natural process, respect for life and human dignity, which are important premises for the work of health professionals 4. However, the literature reviewed highlights the difficulty of professionals in different health areas in caring for and promoting the dignity of patients with no possibility of cure and at the end of their life. According to Pessini and Bertachini 5, the World Health Organization has among its premises the work in a multidisciplinary palliative care team, including, as one of its members, the physiotherapy professional.
The work of the physiotherapist is fundamental in the entire health-disease process, as it contributes to the promotion of health, treatment, rehabilitation and prevention of further harm, as well as palliative care, with an emphasis on quality of life, an important precept incorporated into the new Code of Ethics and Deontology of Physical Therapy (Código de Ética e Deontologia da Fisioterapia) 6. To perform their work, physiotherapy practitioners use manual techniques, mechanotherapy, and thermodigital electrotherapy resources and, as general practitioners, work in the most diverse areas of health: respiratory 7, neurology 8, orthopedic trauma 9, gynecology 10. For example, in the case of patients in a severe state, confined to bed and submitted to artificial respiration, the physiotherapist usually monitors the parameters of mechanical ventilation and performs procedures aimed at maintenance and/or the patient's quality of life 6,11.
Physiotherapists who work with palliative care also use resources to relieve pain. For this type of work, the professional will have available some therapeutic procedures that may reduce the pain and suffering of the patient and help in its management. The professional is also responsible for the initial evaluation to identify the physical and psychosocial needs, as well as aspects of the environment where the patient is situated. However, before beginning any procedure, physiotherapists should inquire about the patient's desire - if he/she is able to choose and make decisions - to receive physiotherapeutic treatment. Failure to comply with the patient's consent regarding the procedures to be performed may result in bioethical conflict, violating respect for autonomy 12.
It is worth emphasizing that the subject of palliative care and the end of life is still little discussed in the academic training of physiotherapy students, even though the field demands that the future professional contribute to psychological wellbeing and deal with the pain, suffering and expectations of the person and family members regarding the physiotherapeutic treatment 13. It is important that professionals know the limits of their abilities, in order not to generate unrealistic expectations and frustrations, since, as Kovács points out, there is no solution to death, but there is the possibility of helping people die well and with dignity 14.
For Badaró and Guilhem 15, the insertion of physiotherapists into bioethical scenarios, such as the end of life, is still very incipient, and it is fundamental that professional education and training aim to prepare them to deal with these conflicts by means of bioethics. It is necessary to take into account the dignity of human beings, their corporeality, the quality and sacredness of life, and the benefits and potential harm caused by physiotherapeutic treatments, without failing to consider the vulnerability and personal integrity of each patient. It is understood that bioethics, through the understanding and application of its principles, could strengthen decision making regarding the therapeutic activity of physiotherapy professionals, providing them with theoretical and practical support for the best approach to benefit the patient.
For Marques, Oliveira and Marães 16, it is important that physiotherapists study the phenomenon of death, revealing the main conflicts existing in therapeutic practice with patients in the context of death, especially those of a bioethical nature. However, the authors call attention to the importance of adopting an academic approach that is focused on the comprehensive training of professionals - and not on technicists - that is, on persons who, besides performing good technique, can adopt new postures regarding issues related to pain, suffering and the finitude of life, as Pessini and Bertachini 17 point out. For humanized work in the field of health, Ferreira 18 points out that it is important to create a new culture in health and to humanize the therapeutic process through professional education based on the improvement of working relationships in teams, having as a reference respect for human dignity.
The objective of this study was to identify and analyze bioethical conflicts in the work of physiotherapists providing home care for terminal patients. For this research, bioethics has become a tool for reflection in the face of biotechnological advances, because through these, life starts to be monitored and also prolonged 14. In this sense, considering a scenario in which the aim is to increase survival, especially of terminal patients, we should also think of the humanization of care, since, according to Masiá, we are becoming more aware of the need to humanize the process of dying 19. Thus, issues related to technicalism versus humanism, identified in the participants' reports and that can result in bioethical conflicts - regarding respect for autonomy, beneficence, non-maleficence and human dignity - were also discussed in this study.
Methodology
This is a qualitative and descriptive-exploratory study. Participants were ten physiotherapists from the Federal District, selected for convenience, who accepted the invitation published on the website of the Regional Council of Physiotherapy and Occupational Therapy of the 11th Region (Conselho Regional de Fisioterapia e Terapia Ocupacional da 11ª Região - Crefito 11) and also by the Union of Physiotherapists of the Federal District (Sindicato dos Fisioterapeutas do Distrito Federal - Sindfisio).
Through the contact of the researcher responsible for the study, the physiotherapists who met the following inclusion criteria were selected: 1) to provide home care for terminal patients; and 2) to have been working with these patients for at least six months. After this selection stage, the physiotherapists were contacted, a convenient day, time and place was scheduled with each professional, and the researcher (first author of this article) went to see them. At the time, the participants agreed and signed the free and informed consent form, according to the resolution of the Brazilian National Health Council in force at the time: Resolution CNS 196/1996 (Resolução CNS 196/1996) 20. The participants were guaranteed confidentiality and anonymity, as well as assured the right to refuse to continue participation in the study, if they so wished, voluntarily and free of charge.
In-depth interviews for data collection were carried out from May to July 2010, through a semi-structured script with questions to characterize the sociodemographic profile, and four open questions about the practices of the physiotherapists involved in home care for terminal patients. In the interviews, participants were given fictitious names to preserve confidentiality. Of the ten professionals, eight were interviewed at the workplace in a reserved environment, and two were interviewed in their homes, without interruptions during the interviews, which were recorded with the consent of the participants.
The data were analyzed using the comprehensive technique for qualitative data, according to procedures described by Minayo 21: transcription of interviews and organization of verbal reports, horizontal readings for cross-sectional elaboration aiming to classify data, extracts from participants' conversations for eventual comparisons, and exhaustive reading to identify units of meaning and categorization. Two categories were highlighted in this article: the challenges of home care for terminal patients and the polarization of physiotherapists between technicalism and humanism.
Results and discussion
Regarding the profile of the physiotherapists interviewed, six were women and four men, aged between 25 and 41 years. Of the total, six were single, three married, and one divorced. Half of the physiotherapists were from the Federal District and the others from the states of Goiás, São Paulo and Minas Gerais. Respondents, with one exception, reported having religious beliefs. All of them practiced physiotherapy in the Federal District, four of whom had employment contracts within the private health network, one was a public servant of the Federal Department of Health, and the other five were self-employed. Of the ten physiotherapists, only one mentioned having attended a training course in palliative care, an aspect that may favor the development of technical and relational skills for care in the field in question.
Challenges of home care for terminal patients
According to the testimonies of physiotherapists, caring for people in terminal condition in the home environment has revealed a process permeated by feelings and emotional reactions triggered by polarized experiences, perceived at the same time as both difficult and rewarding. The feeling of anxiety was predominant in the conversation of physiotherapists. Professionals reported having experienced anxiety when perceiving patients' pain and suffering. The non-acceptance of death felt by patients, as well as by the professionals, was also a frequent emotion originating from the involvement developed throughout the therapeutic process. One participant said: "I was very distressed and that was very distressing to us, I was too distressed by that situation ... And she, what most moved us, was that she was not ready for death, she was not, she had not accepted yet that it was near" (Fernanda).
In Fernanda's account, the situation may constitute a bioethical conflict related to the principle of beneficence 12,22 because she was prepared to fulfill her professional duty to do good, to provide benefits through her therapeutic intervention, whether monitoring mechanical ventilation parameters and/or minimizing possible physical pain through thermo-photoelectric therapeutic resources. However, the professional felt distressed because the patient was terminally ill and nothing she could do within her therapeutic intervention would modify the prognosis. For Sadala and Silva 23, in dealing with the finitude of life, health professionals face a situation feared by human beings, which manifests itself in feelings such as anguish and pain. In this sense, for the authors, dealing with death on a daily basis is extremely distressing and exhausting, giving rise to feelings such as impotence, frustration and insecurity in the face of patients' suffering and failure of professional actions 24.
As Singer 3 points out, in these times of biomedical and biotechnology advances, one must rethink life and also death. The latter is governed by reflections such as the question of orthothanasia and the right to choose not to prolong suffering. It is important to prepare, in fact, not only the health team, but also the family members in the process of accepting death as a logical consequence of an irreversible and fateful process 3,16. This awareness, however, is frightening and can be permeated by conflicts, because it refers to the finitude of all those involved - professionals, family members and patients themselves - who may hesitate to face this situation.
According to Moritz 25, during academic education it is taught that physicians hold the greatest responsibility for healing, exhibiting a greater sense of failure and fear of failing regarding their patients' death. Physiotherapists are also prepared for the healing and reinsertion of patients in society, as illustrated by the words of Laís: "we do not have this training of palliative care, we were guided in college, we were trained to rehabilitate, to reinsert the individual in the society". This aspect further complicates the admission of finitude in the process of coexisting with death, since in addition to perceiving oneself to be mortal, professionals may feel unsuccessful because they did not have the possibility of healing the patient.
In general, physiotherapists are prepared in their training to act with beneficence - to relieve, diminish or prevent harm, provide and balance benefits versus risks and costs 6,11,12 - and, although the death of terminal patients is inexorable, it is often difficult for professionals to deal with the situation and to understand that, when the therapeutic resources are exhausted, they would not be acting with maleficence (causing damage). Dealing with death is not, in fact, an easy task in a society that legally sanctifies life 26.
Implementing health services aimed at professionals can also ensure better preparedness and ability to cope with the process of losing a patient 27. Albuquerque 28 reinforces this perspective when pointing out that, in order to humanize health care, professionals must also be considered, taking into account the needs arising from the professional practice itself. Biopsychosocial training can help them cope with difficult situations. In addition, professional training for the provision of palliative care is of paramount importance when we consider that life is an inalienable right and we must be prepared to safeguard it with all possible technical diligence. However, dying with support and dignity is not yet a right provided for in Brazilian law, which does not formally recognize the right to die 29. This fact implies, unequivocally and consequently, the deontological duty of professionals to safeguard life 6,11.
Despite the advances in biotechnology in recent years, little progress has been made in teaching and bioethical discussions in physiotherapy courses, as pointed out by Badaró 15 and Figueiredo 30, who verified the deficiency of bioethics teaching in the physiotherapy undergraduate course. In general, the professional education of physiotherapists reveals a lack of preparation for the topic "end of life", as pointed out by a recent survey that interviewed 222 undergraduate students from the physiotherapy course at the University of Brasília. The survey revealed that 79.7% of the interviewees from the first to the tenth semester reported not having participated, in any of the courses provided, in any analysis or discussion regarding death 31. This data allows us to infer that conflict when dealing with patients' death arises from the moment physiotherapists encounter the reality of the professional monitoring of the terminal process, as Fernanda's words reveal: "I had no preparation for dealing with death, on how to deal with those patients who have no prospect of functional recovery, and so I think it's a great challenge to deal with those who have no prospect of healing". Prepared to promote healing, to act with beneficence and not maleficence, and immersed in a society that denies death, professionals are thrown into an adverse work situation, for which delicacy and empathy are required, which can end up depleting their strength, causing depression and illness.
Bioethical conflicts may arise from the therapeutic choices available to the professional.
To what extent can one's conduct in fact bring benefits to the patient: to act with beneficence rather than maleficence? How to identify the limit of action without prejudice to the health of the professional? How to communicate with the health team maintaining an open relationship for joint and shared decisions, based on solidarity and cooperation? 16,22,26 Physiotherapists, like other health professionals, are prepared to act with beneficence, preserving the life of their patients and reinserting them in society, as has been occurring since the teaching of history and fundamentals of physiotherapy in undergraduate courses. However, students are not involved in discussions regarding death and the loss of patients; usually they are not encouraged to talk about their feelings during and after care practices, or about the feelings of their patients and family members, which may lead to conflicts in decision making in treatments involving bioethical issues 22,32,33. It is necessary for professionals to consider the limit of biotechnology for the maintenance of life, and that they are also prepared to accept that there are limits to therapeutic intervention, recognizing the fine line between benefits and damages, not only linked to the physical body, but also to the psychosocial dimension.
The physiotherapist between technicalism and humanism
The research pointed out a polarity between eminently technical and humanistic performances, a challenge that fits with the reflection regarding bioethical conflicts. It was observed that the execution of technical procedures was predominant, using available means for the therapeutic work to be carried out without demonstrating in their speeches a concern to interact with patients as persons inserted in a psychosocial context, as shown in the following statements:
Although the conduct described in these statements reflects the intention to act based on beneficence, through eminently technical and objective actions, it also points out the lack of appreciation of subjective aspects, from practices such as listening to patients or dialogue with caregivers and family members. The absence of these aspects can transform professionals into mere caretakers of diseases, as stated by Siqueira 34, or into technicians who monitor apparatuses, which implies a potentially harmful behavior (maleficence). It is possible to see that the rigid observance of the technical aspects comforts professionals, who feel that they are fulfilling the task of providing care in the best possible way, without compromising their emotions and feelings through intense contact with the situation of the finitude of patients.
What can be verified with this research is that, in professional practice, appreciation of the technique occurs, especially when physiotherapists providing home care for people in terminal condition perform their work almost without verbal interaction with patients or their relatives. In this case, the greatest concern of professionals, from the first approach, is to check physical signals and move to the stage of performing the work through the chosen technique(s). This approach emphasizes the distance between professionals and patients, turning the latter into objects of the technique of the former. In these situations, the risk of ethical infraction is accentuated, especially the lack of respect for the autonomy of patients.
In contrast, when professionals manage to ally technique and a more humanized attitude, both they and the patients enjoy enriching moments during therapy, bringing comfort to a situation marked by suffering and pain. This form of professional performance can generate even greater participation of family members, who are better oriented as to how to deal with the terminal condition of patients, implementing functional adaptations and sharing their own anxieties and apprehensions. In the care of terminal patients, an important aspect to be analyzed, especially from the bioethical perspective, is strict technicalism, exercised in objective therapeutic practices, which limits the integration of the humanistic approach to care. Machado and collaborators 35 define this process as dehumanization, when the professional starts to see the disease and no longer the human being, when they begin to value the management of critical patients and discuss the clinical decision without entering the subjective universe. This occurs, for example, when they ignore the emotional and financial situation of the family. For these authors, to perceive the other is a question that involves a deeply human attitude 35.
To break with dehumanization and practices centered on the model based on the Cartesian paradigm - which welcomes the object (or objective) and not the subject (the patient), the biological body and not the integral human body - it is necessary to emphasize the relational process in the therapeutic action, which can bring benefits and reduce vulnerabilities. Society today requires professionals capable of developing not only technical skills but, as Crippa and collaborators 36 point out, values such as compassion, sensitivity, dedication, and ethics. Barchifontaine 37 also points out that care is pertinent to the dignity and humanity of the patient and reinforces the ethical fields of simple attention, open participation and solidarity, and affects the way others are seen. That is, the relationship between health professionals and patients should be the focus of care and based on the recognition of human dignity and solidarity, which consequently leads to humanization.
In the cases reported by professionals Tobias and Amanda, the conflict of autonomy was evident, that is, the freedom to decide which behaviors to adopt, more technical or more human, depending on personal characteristics, and not only on professional training. There are professionals who do not want to bond or interact more with their patients, because they are there to carry out their work and move on to the next appointments. The use of verbal communication is also therapeutic, especially for patients with no possibility of cure, and implies quality of care, which is a safeguarded benefit for patients in the Universal Declaration on Bioethics and Human Rights 22.
The humanistic dimension, in turn, appeared in the discourses of some of the participants associated with the appreciation of the quality of communication and with the interaction developed between physiotherapists, patients, patients' caregivers and families. In this research, it was verified through testimonials that physiotherapists value the fact of establishing and maintaining constant communication with patients. Interviewees reported that, at the initial moment of the physiotherapeutic care, they sought to turn the attention and interest to the patients' overall picture: to know how they felt, which included feelings and sensations regarding their physical conditions, explanations about the care provided to patients, as well as to identify the affective and relational climate present among family members. From this scenario, they defined the therapeutic action to be adopted in the intervention, as exemplified by excerpts from the participants' narrative: "To show him that he is very important to us ... I would come in, say hello, talk a little, even if he did not speak ... I would talk to him with or without answers" (Sheila); "Well, I arrive at the patient's home, I talk to the family member responsible, if the patient communicates verbally, I also do, of course, I talk a lot about how he/she has been since the last session, how he/she is feeling that day, if he/she is in pain, if there is anything special I could do on that day" (Laís).
In the reports from the professionals, it is observed that when the option is made to establish adequate communication channels with patients and their relatives, from the initial interaction and during the therapeutic intervention, the physiotherapist starts to participate in the patient's daily life. This approach makes them more sensitive and attentive to physical, emotional and psychological needs, thus helping to minimize the vulnerability of those under their care. Kovács 14 calls these professionals who deal with death, employing their professional diligence as well as biopsychosocial care, "palliativists", precisely for putting beneficence into practice when a simple dialogue can contribute to the relief of pain and suffering. According to Coelho and Ferreira 38, the conversation provides relief, conveys a sense of welcome and has a beneficial or therapeutic effect.
By means of proper evaluation, when interacting verbally with patients and relatives, professionals can perceive their characteristics and individuality, know their history, not only that of the disease, and may interact with patients in this universe of senses and meanings 18. On the other hand, professionals should recognize patients' desire for silence, but should not discourage them from saying what they feel, leaving them at ease, so that they can trust the professional who cares for them and accompanies them at this difficult time. This established connection can help professionals build humanistic values based on compassion, mutual respect, solidarity and integrity. Given the specific characteristics of their performance, to provide quality care physiotherapists need to touch and talk to patients, approaching them physically, psychologically and emotionally.
For O'Sullivan and Schmitz 39, people are not born with these values, but these are acquired through personal and interpersonal experiences. For the authors, patients' values are important factors to be considered in their treatment, since they reach physiotherapy in diverse stages of their lives, and with unique histories 40. How does one identify these values if physiotherapists do not establish a more humanized relationship, based on dialogue, on listening and on the process of welcoming and bonding? Therefore, it is necessary to interact not only with patients, but also with their family members, and to build humanized relationships, with the potential to prevent bioethical conflicts. Professionals need to feel safe, be prepared to act in extreme situations, and to understand that in these situations there is a fine line between beneficence and maleficence and between technicalism and humanism.
Undoubtedly, the technical dimension is essential for good physiotherapeutic practice. It is the application of appropriate procedures that will allow the maintenance of the integrity of the patient's body, working, for example, on the prevention of severe pain and deformities due to the long time confined to bed. However, the technique must be permeated by the relationship of affection, by the perception of patients' psychological and existential conditions. After all, it is this humanistic relationship that will allow professionals to recognize that the illness is not strictly a biological phenomenon, but a biopsychosocial, and spiritual phenomenon as well. When professionals value human interaction with patients in their therapeutic practice, they begin to pay attention to the sick person and not specifically to the disease 34. This passage from Samuel's account exemplifies the idea: "The person is terminal but, while he/she is there, you can talk to him/her, you can touch him/her, you can give affection, you can do a lot of things, you can act like a person on his/her side, you cannot treat them like a plant, he/she is a human being." Another important aspect in the analysis of the question of technicalism is that it seems to place professionals as the center of the care process. Moritz 41 emphasizes aspects of the technicist education and discusses the development of health professionals' defense mechanisms when dealing with death. The author emphasizes that, at the beginning of their training, medical students have contact with corpses in the study of anatomy and are faced with a disfigured body, blackened by the formaldehyde, in which students can barely identify a human being that has passed through life and felt the emotions that mark them as an individual 41. We can affirm that this context is not different for physiotherapy students.
Physiotherapists need to be prepared to recognize the psychosocial needs of their patients, not only the physical ones, even if they are in a vegetative state, as was observed in Samuel's speech, which emphasizes the need for a more humanized and integrated treatment: "he/she is not a plant, he/she is a human being". When patients are seen as what they are - human beings - the bioethical principle of human dignity materializes, which, as Albuquerque 42 points out, is one of the core concepts of patient-centered care. Regardless of the disease stage, advanced or not, human beings need to feel accepted, understood and valued 43,44. To see patients in the first place, not the illness, is to attribute fundamental value to the human being in terms of his/her dignity and integrity.
The concept regarding the interface between bioethics and patients' human rights is still recent. More discussions and advances are needed to guide health practices, to prepare and sensitize professionals to minimize the conflicts arising from facing limit situations 45.
Final considerations
This study does not exhaust the topic, but reveals some bioethical conflicts often experienced in the daily work of physiotherapists in the care of terminally ill patients and their relatives. These are situations in which the limits for the use of therapeutic resources are translated into polarized attitudes - of approach or distancing - and the challenge of promoting care based on humanization and human dignity. In providing humanized care, professionals are exposed to the anguish and existential suffering experienced by patients in the process of dying. Without being adequately trained to deal with these situations, and without support to manage them, professionals can succumb to stress, which will prevent them from exercising their activities effectively and, worse, can lead them to chronic illness.
There is a great challenge for the institutions that train physiotherapists to include in the whole formative process, in an integrated way, knowledge that bases health care bioethics on autonomy, dignity and human rights. It is important to establish a guideline for training these professionals in the logic of permanent education, both in home care programs and in other health settings. We consider that, in order to face the issue decisively and appropriately, it is indispensable to include the discussion of the issue of death and of dying in professional training, to promote permanent training to work in palliative care and to provide psychological support to professionals, when necessary. These tasks should be adopted by educational institutions, promoted by the professional bodies and fostered by and for all professionals who are concerned with the ethical practice of the profession.
|
2018-12-07T14:43:59.968Z
|
2017-04-01T00:00:00.000
|
{
"year": 2017,
"sha1": "55ac74181fc7a6b1beebb788182bb7d268d57359",
"oa_license": "CCBY",
"oa_url": "http://www.scielo.br/pdf/bioet/v25n1/en_1983-8042-bioet-25-01-0148.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "67123dddb9cf0a95ad8d9850ba0dc3ea587100b2",
"s2fieldsofstudy": [
"Medicine",
"Philosophy"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
249238183
|
pes2o/s2orc
|
v3-fos-license
|
Laboratory study of the effectiveness of Confider in controlling leafhoppers on a crop of cowpea
The cowpea leafhopper Amrasca biguttula is one of the important pests that afflict the cowpea crop in Iraq and cause economic losses. Therefore, chemical control was studied in the laboratory and its effectiveness was demonstrated, using the pesticide imidacloprid (Confider) of the neonicotinoid chemical group. Confider was applied as a spray treatment at concentrations of 25, 50, 75 and 100 ml/L, and as a soil treatment at 100, 200, 300 and 400 g/ml. The lowest hatching rate of leafhopper eggs in the spray treatment was obtained at a concentration of 100 ml/L (79.44%). The average mortality rate resulting from the use of the pesticide Confider reached 33.47%, and the highest mortality rate, 39.73%, was recorded 72 hours after treatment.
Introduction
Cowpea Vigna unguiculata L. is a leguminous crop grown in the semi-arid areas within the tropical zone. It originated from Africa [1], then moved to other continents including Asia, Europe and Central and Southern America [2]. Due to its high protein and carbohydrate contents, cowpea is cultivated in Iraq as a food source to substitute heat-intolerant crops during the dry season. It can be consumed as seeds or green pods. The estimated Iraqi production of cowpea was 246 and 46,200 tons of dry seeds (FAO 2021) and green pods (CSP 2021), respectively. Cowpea production can be threatened by several pests, including the cowpea jassid, Amrasca biguttula biguttula Ishida (Hemiptera: Cicadellidae). This insect can cause damage to cowpea through sap-feeding on the lower leaf surface, resulting in phytotoxic symptoms known as "hopper burn" at mouthpart penetration sites [3,4]. Other symptoms include crinkling around margins and upward curling of leaves, necrotic areas developing on leaf tips and margins (CABI 2022), or abnormality and browning of vascular bundles [5]. Research has also shown that leafhopper feeding on sap has a significant direct impact on the chlorophyll content of grape leaves [6]. Indirect damage may occur through transmission of viruses and phytoplasma diseases during insect feeding [7]. A. biguttula biguttula Ishida was reported from Iraq for the first time in 2017 [8]. This leafhopper was found to affect many host plants, including okra Abelmoschus esculentus, eggplant Solanum melongena, pepper Capsicum annuum, cowpea and mallow Malva parviflora, causing serious losses. In Iraq, this leafhopper had five nymphal instars with durations ranging between 6.66-9.33 days, whereas adult longevity ranged from 15-19 days. Chemical control has been used to control leafhoppers [9]. Plant extracts are also used in the fight against insects [10]. Several neonicotinoid insecticides were used against sap-feeding insects [11]. Pesticide residues on crops have negative effects on human health [12]. This insect can be a serious threat to cowpea production in Iraq, through direct damage or as a potential vector of phytoplasmal diseases [13]. Thus, this study was initiated to confirm the identification of A. biguttula biguttula, collected from cowpea, based on molecular approaches and to control it using imidacloprid leaf spray and soil treatments.
Materials and Methods
Collection of the leafhopper
A number of cowpea leaves were collected randomly from an infested cowpea field in Baghdad governorate (fields of the College of Agriculture) and placed in breeding cages for laboratory use at a temperature of 25°C and a relative humidity of 65%, for testing the effectiveness of the pesticide on all stages of the insect.
Bio-evaluation of pesticides
Laboratory experiments were carried out to evaluate the effect of the pesticide on the leafhopper on the cowpea crop, using a common cultivar grown in Iraq. Plants were prepared continuously by sowing the seeds of this variety in small pots, 12 cm in diameter and 12 cm in height, containing sterile soil. After germination of the seeds, the plants were thinned to one seedling per pot. Seedlings at the four-true-leaf stage were used in the bioassays. The pesticide Confider, of the neonicotinoid chemical group, was evaluated against all stages of the leafhopper in two ways: 1) spray treatment of the vegetative system (foliar application); 2) soil treatment application. Confider was used for the spray treatment at concentrations of 25, 50, 75 and 100 ml/L. The concentrations used for the soil treatment were 100, 200, 300 and 400 g/m; the pesticide granules were added directly to the soil, following the procedure of [14]. The percentage of mortality was calculated in all bioassays and the results were corrected according to the equation [15]: corrected mortality (%) = (number of insects in the treatment before treatment ÷ number of insects in the control before treatment) × 100.
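Because the printed correction equation appears incomplete, the sketch below, in Python, implements the Henderson-Tilton correction that is commonly used when both treated and control populations are counted before and after application. This is an illustrative assumption about the intended calculation, not the authors' exact formula, and all counts are hypothetical.

    def henderson_tilton(t_before, t_after, c_before, c_after):
        """Corrected mortality (%) from pre/post counts in treated and control units.

        Henderson-Tilton correction; offered here only as a plausible reading,
        since the paper's printed equation is garbled.
        """
        survival_ratio = (t_after / t_before) / (c_after / c_before)
        return (1.0 - survival_ratio) * 100.0

    # Hypothetical counts: 50 nymphs per seedling before spraying.
    print(henderson_tilton(t_before=50, t_after=28, c_before=50, c_after=47))  # about 40.4 %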
Bioassay of the pesticide
Treatment of Leafhopper Eggs
Prepared seedlings were used as described in the paragraph above and were divided into three groups as follows. The first group of seedlings was transferred to leafhopper breeding cages (laboratory culture) containing large numbers of leafhopper adults, and left there for a period of 24 hours, sufficient time for eggs to be laid on them. They were then taken out of the cages after shaking the plants gently to remove the leafhopper adults. One hundred eggs were counted on each seedling under a microscope, and the remaining eggs were removed from the leaves using a fine needle; the seedlings were then placed in the incubator for four days. The second group of seedlings was transferred to the breeding cages a day after the first group's seedlings were taken out, and removed after 24 hours. In the same way, 100 eggs were counted on each seedling and the seedlings were placed in the incubator for two days.
The third group of seedlings was transferred to the breeding cages one day after the seedlings of the second group were taken out, and was likewise removed after 24 hours, with 100 eggs retained on each seedling in the same way. Thus, eggs were obtained at the ages of one day, three days, and five days. The seedlings bearing the eggs were treated with the pesticide at the concentrations shown above, with four replicates for each concentration. Spraying was done with a one-liter plastic hand sprayer until the point of run-off from the leaf surface, while the control treatment was sprayed with water only. The seedlings were left to dry for an hour and then returned to the incubator, and the number of hatched eggs on the seedlings was counted one day after the eggs hatched in the control treatment of each group. The corrected hatching percentage was then calculated.
Treatment of Leafhopper Nymphs
Leafhopper nymphs of instars I, II, and V were obtained by placing potted cowpea seedlings into the insect breeding cages. Twenty-four hours after eggs had been laid on the seedlings, these were isolated and divided into three groups. The incubation duration of the nymphs after hatching was set for each group under the incubator conditions, so that each group of seedlings contained leafhopper nymphs of a specific instar. The nymphs of each instar were treated by spraying with the pesticide at the concentrations shown above using a hand sprayer, with four replicates (seedlings) for each concentration, each seedling carrying 50 nymphs that had previously been counted using a microscope. Extra nymphs were removed from the leaves with a small brush, and the control treatment was sprayed with water only. The seedlings were left for an hour to dry and then returned to the incubator, and were examined after 24, 48, and 72 hours to calculate the cumulative mortality rate for each treatment and each nymphal instar.
Treatment of Leafhopper Adults
Cowpea seedlings grown in plastic pots were sprayed with the pesticide at the concentrations shown above, with four replicates for each concentration; the control seedlings were sprayed with water only. The seedlings were left for an hour to dry, and transparent plastic tubes (9 cm in diameter and 14 cm in height) were placed over the pots so that the lower opening of each tube rested on the soil of the pot, while the upper opening was closed with a piece of fine mesh cloth fixed with a rubber band. Using an eyedropper, 20 adult leafhoppers aged 24-48 hours were transferred into each tube, and the tubes were then placed in the incubator. The percentage of mortality was calculated after 6, 24, 48, and 72 hours of treatment.
Statistical analysis
All laboratory experiments were carried out according to a completely randomized design (CRD) [16]. The statistical software package SAS (2001) was used to analyze the data.
Results and Discussion
Bioassay of the pesticide on leafhopper stages
Treatment of eggs
The results showed a slight decrease in the hatching percentage of leafhopper eggs treated with the pesticide by both the spray and soil treatment methods, with significant differences from the control treatment (Table 1). The lowest hatching rate in the spray treatment was obtained at a concentration of 100 ml/L (79.44%), with significant differences from the other concentrations, while the highest hatching rate in the spray treatment was obtained at a concentration of 25 ml/L (85.33%); the hatching percentage in the control treatment was 91.89%. Similarly, [17] found that Confider applied as a spray on cotton plants had a relatively limited effect on pest eggs. For the soil treatment, the lowest hatching percentage was obtained at a concentration of 400 mg/L (84.78%), with significant differences from the other concentrations, while the highest hatching rate was obtained at 100 mg/L (88.67%), which still differed significantly from the control treatment; the hatching percentage in the soil-treatment control was 92.22%. The hatching rate was also related to the age of the eggs treated by the two methods, as eggs treated at the age of one day were more sensitive and showed a lower hatching rate than eggs treated at three days of age. The hatching percentages for the three egg ages were 83.00, 84.20 and 85.93% for the spray treatment and 85.93, 87.93 and 89.53% for the soil treatment, respectively.
Treatment of Nymphs:
The results showed that both application methods of the pesticide Confider, spraying and soil treatment, were effective against leafhopper nymphs of all instars (Table 2). Mortality rates varied with the concentration used, increasing from the low to the high concentration. For the first nymphal instar, the mean mortality rate resulting from use of Confider reached 33.47%, a significant value. For the second nymphal instar, the mortality rates for all pesticide concentrations followed the same pattern but with lower values and a significant difference, the mean mortality rate reaching 31%. An even greater decrease was recorded in the mortality percentages of fifth-instar nymphs, again with a significant difference, the mortality rate for Confider reaching 26.67%. These results indicate a significant difference among the nymphal instars of the leafhopper in their sensitivity to the pesticide, which was inversely associated with instar age. Similarly, [18] reported that advanced nymphal instars of the whitefly are less sensitive to a growth regulator than the younger instars, with corrected mortality rates of 92.6, 87.9 and 85.4% for instars I to IV when the growth regulator was used at a concentration of 0.5 ml/L. For the soil treatment, mortality rates increased with increasing Confider concentration. A significant difference was also found among the nymphal instars in their sensitivity to the pesticide, the first instar being the most sensitive, with an overall mortality rate of 48.8% compared with 6% in the control treatment. The fifth nymphal instar was the least sensitive, with a significant difference, its mortality rate reaching 42.4% compared with 2.5% in the control treatment. The older the nymphal instar, the less sensitive it was to the pesticide. This may be because first-instar nymphs are still weak and vulnerable and are therefore more affected than fifth-instar nymphs, which are the largest and have developed defensive mechanisms that make them more tolerant. Likewise, [19] found, when evaluating imidacloprid 2.5% applied to the soil and sprayed on plants, that both treatments were effective against whiteflies, with the soil treatment being superior to the spray treatment.
Treatment of Adults
Both the spray and soil treatments with the pesticide Confider were highly effective against leafhopper adults, with significant differences between them (Table 3). For the spray treatment, the mortality rate achieved across the Confider concentrations reached 39.73% after 72 hours of treatment. The table also shows that mortality percentages differed according to concentration, increasing with the concentration used, and that mortality increased gradually with time, the highest mortality (39.73%) being recorded 72 hours after treatment. Mortality in the first period after exposure in the spray treatment is attributable to the contact effect of this pesticide, while the cumulative mortality increases over time as the systemic action takes effect [20]. For the soil treatment, the overall mortality achieved with Confider was 31.48%. Mortality also increased with concentration and with time for all pesticide concentrations: it was 7.2% after 24 hours of treatment, increased to 15.87% after 36 hours, nearly doubled to 31.67% after 48 hours, and reached its highest value of 47.47% after 72 hours. This is explained by the systemic nature of the pesticide: when the soil is treated, time is needed for the pesticide to be absorbed by the root system and translocated to the different parts of the plant until it reaches the pest as it feeds on the plant sap; once it reaches its site of action, the receptors of the central nervous system, it causes paralysis and then death. Similarly, [21] reported the superiority of Confider soil treatment over the spray treatment, with the mortality rate in whitefly adults reaching 93.84% at a concentration of 25 mg/L of water. These laboratory experiments demonstrated the effectiveness of the pesticide in reducing leafhopper numbers in both the nymphal and adult stages, with its effectiveness lasting up to four weeks, so it can be used in integrated pest management and can contribute to increasing production and protecting the crop.
Ultrasonographic renal sizes, cortical thickness and volume in Nigerian children with acute falciparum malaria
Background Utility of sonographic assessments of renal changes during malaria illness is rarely reported in African children in spite of the high burden of malaria-related kidney damage. Methods In this case–control study, renal sizes, cortical thickness and volume of the kidneys of 131 healthy children and 170 with acute falciparum malaria, comprising 85 uncomplicated malaria (UM) and 85 complicated malaria (CM) cases, measured within 24 hours of presenting in the hospital, were compared. Results The mean age of children in the UM, CM and control groups was 49.7 ± 26.2 months, 50.7 ± 29.3 months and 73.4 ± 25.5 months, respectively (p < 0.001). The mean right kidney length of the CM group was higher than control by 0.41 cm (95% CI = 0.16, 0.65; p < 0.001) and UM by 0.32 cm (95% CI = 0.02, 0.62; p = 0.030). Similarly, mean left kidney length of CM was higher than control and UM by 0.34 cm (95% CI = 0.09, 0.60; p = 0.005) and 0.41 cm (95% CI = 0.09, 0.72; p = 0.006), respectively. Estimated mean renal volume of the CM group was significantly higher than the control group by 7.82 cm3 for the right and 5.79 cm3 for the left kidney, and higher than the UM group by 9.31 cm3 for the right and 8.87 cm3 for the left kidney, respectively. Conclusion There was a marginal increase in renal size of children with Plasmodium falciparum infection, which worsened with increasing severity of malaria morbidity. Ultrasonography provides important information for detecting renal changes in children with acute malaria.
Background
Malaria remains a public health problem with significant morbidity and mortality posing major economic and developmental challenges in sub-Saharan Africa [1]. Malarial illness may take a variety of clinical forms, differing in pattern and severity, from uncomplicated to severe malaria. Despite efforts at controlling malaria, two out of ten children admitted to children emergency units suffer from severe forms of malaria and/or its complications [2]. Worldwide, about 90% of the reported cases and 85% of the deaths have been attributed to malaria in sub-Saharan Africa [3], where Plasmodium falciparum infection is responsible for almost all the morbidity and mortality.
The pathophysiology of severe falciparum malaria is complex and multifactorial with parasitized red blood cell destruction resulting in the release of haemoglobin and other toxic metabolites, up-regulation of cytokines, acute phase reactants all playing important pathogenic roles which may cause inflammation, tubulo-interstitial damage, glomerulonephritis and pigment nephropathy, all of which may lead to acute kidney injury (AKI) [4]. Moreover, previous studies [5,6] have demonstrated that cardiac output in children with severe malaria is adversely altered, but it is not clear from literature to what extent this alteration affects the kidneys. Data from Ghana [7] and Kenya [8] indicated that signs of shock, such as capillary refill, are common in children suffering from severe malaria. These adverse effects, though secondary rather than primary, support potential reversible ischaemic damage to the kidneys during acute malarial illness. Repeated P. falciparum infections can also result in nephron loss leading to chronic renal disease, including nephrotic syndrome, often non-responsive to steroid treatment [9][10][11][12].
Though the occurrence of haemoglobinuria and AKI is a significant complication in children with malaria, only a few studies have reported the magnitude. A recent study reported that as many as 19.1% of children with acute falciparum malaria developed haemoglobinuria [13]. AKI has been attributed to malaria in 13.7% [14] and 46.2% [15] in Nigeria. Weber et al. [16] in The Gambia observed that 25% of the cerebral malaria cases and 4% of children with mild malaria had AKI. Mortality from malaria-related AKI could be as high as 23% in endemic area [17].
Recent studies have shown that cases of severe malaria and its complications are on the increase probably because of the emergence of drug resistant parasites [18,19]. Early detection and monitoring of effects of malaria on the kidneys are important because timely interventions may prevent progression to irreversible damage. However, this is often very challenging in Africa mainly because of inadequate laboratory facilities. Currently, decisions relating to care of children with acute malaria are mostly guided by physical and biochemistry findings. Apart from the exorbitant cost of biochemistry tests, the turn-around times for getting reports from the laboratory often cause delay in treatments where available. Though ultrasound equipment may be available, sonological assessment of the kidneys in children with malaria is not routinely done, despite the fact that studies have validated the use of ultrasound in assessing renal functions in both clinical and epidemiological studies [20][21][22]. While other modalities can be used to determine kidney volume [23,24], ultrasound is preferred in most resource poor settings because it is relatively affordable and noninvasive. It is however, not known whether any change in renal sizes could be detected by using ultrasonographic scanning. This study, therefore, compared the length, width, anterio-posterior diameter and cortical thickness determined using ultrasound as well as estimated volume of kidneys in children with complicated and uncomplicated malaria with those of healthy children with no malaria parasitaemia.
Study design and setting
The study was case-control in design. Cases were recruited from the Children's Clinics and Emergency Unit of the University College Hospital (UCH), Ibadan, Nigeria, while controls were selected among children living in the same neighbourhood of the respective cases. The UCH is a foremost tertiary referral hospital located in Ibadan, South-West of Nigeria. The yearly admissions in the Department of Paediatrics, UCH are approximately 2,500 with about 11% of them being cases of complicated malaria. Based on the 2006 National Census, Ibadan has an estimated population of 2,550,393. Children less than 15 years constitute about 19% of the population of Ibadan.
Study population and sampling
Cases were children who presented with symptoms and signs of malaria and a positive blood smear for malaria parasites, while controls were children of similar socioeconomic status who had no symptoms and a negative blood smear for malaria parasites. Cases were grouped into uncomplicated and complicated malaria as defined by the World Health Organization [25]. Only eight children whose caregivers declined consent were excluded, but they received standard treatment according to the national guidelines on the treatment of malaria.
Sample size and power calculation
At the design stage of the study, it was assumed that utilizing ultrasound, there would be a mean difference of 13.0 cm 3 in renal volume (this value was obtained from a pilot of 10 children) between cases and control. Therefore, studying unmatched 170 cases and 131 controls gave a statistical power (1-β) of over 90% at 95% confidence interval (CI).
Data collection and laboratory procedures
Trained research Nurses and assistants administered a pre-tested structured record form to parents and their children at the time of recruitment. Each child was examined by the paediatrician with socio-demographic data, weight and height recorded. Laboratory investigations carried out included blood smear for malaria parasite counts, plasma urea and creatinine. The malaria parasites were counted against 200 white blood cells (WBC) and parasite density was calculated for each patient based on an assumed total WBC of 8,000/μL of blood [26]. All children with positive blood smear for malaria parasite were treated according to the Nigeria national anti-malarial treatment guidelines. Haemoglobin type of all participants were determined using gel electrophoresis method [27].
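The parasite-density estimate described above is a simple proportion: parasites counted against 200 WBCs are scaled to the assumed total of 8,000 WBC/μL. A minimal sketch of that arithmetic is shown below; the function name and the example count are illustrative only.

def parasite_density_per_ul(parasites_counted, wbc_counted=200, assumed_wbc_per_ul=8000):
    """Estimate parasites per microlitre from a count made against WBCs."""
    return parasites_counted * assumed_wbc_per_ul / wbc_counted

# e.g. 125 parasites counted against 200 WBC -> 5,000 parasites/uL
print(parasite_density_per_ul(125))  # 5000.0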
Ultrasonographic procedures
All subjects had both kidneys scanned within 24 hours of presenting in the hospital using a portable Micromax Sonosite Inc. Bothell, WA, USA, with 5-8 MHz curved array transducer. All participants were scanned in the supine and decubitus positions in the longitudinal and transverse planes for length, width, anterio-posterior diameter and cortical thickness measured in centimeters. The liver and spleen were used as acoustic windows for the kidneys on the right and left respectively. The acquired measurements were recorded in a data form. Two certified sonologists performed the scan independently at each visit. These sonologists were blinded to the laboratory test results. The degree of agreement between findings reported by the sonologists was evaluated with the initial ten patients scanned (k = 0.9) in the pilot study.
Variables, data handling and analysis plan
Data on kidney length, width, anterio-posterior size and cortical thickness were analysed using SPSS 16.0 statistical software (SPSS Inc. IL., USA). Kidney volume was estimated using the ellipsoid formula [23,28]. Renal sizes are known to vary with age and height [29], and since the three groups (control, UM and CM) differed significantly in age, ANCOVA was used to compare mean values of renal sizes adjusting for age. The Bonferroni procedure was adopted to manage Type I error that arose from multiple comparisons.
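The ellipsoid formula cited above [23,28] estimates renal volume from the three sonographic diameters; the commonly used form is V = length × width × anteroposterior diameter × π/6, although the exact coefficient can vary slightly between references. A minimal sketch, assuming the π/6 coefficient (the measurements shown are illustrative, not patient data):

import math

def ellipsoid_renal_volume_cm3(length_cm, width_cm, ap_cm):
    """Renal volume (cm^3) from the ellipsoid approximation V = L * W * AP * pi/6."""
    return length_cm * width_cm * ap_cm * math.pi / 6.0

# Illustrative: an 8.0 x 3.5 x 3.8 cm kidney -> about 55.7 cm^3
print(round(ellipsoid_renal_volume_cm3(8.0, 3.5, 3.8), 1))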
Ethical considerations
Participation in the study was completely voluntary and based on written informed consent from child caregivers and assent of the children. Parents were made to understand that they were free to withdraw their consent at any time, and that they will continue to receive standard level of care even in such situation. Privacy of participants was maintained by using serial numbers on the case record forms. The study protocol was approved by the University of Ibadan/University College Hospital Ethical Review Committee.
Clinical data
One hundred and seventy children with confirmed P. falciparum malaria comprising 85 uncomplicated and 85 complicated malaria cases and 131 healthy children with negative blood smear for malaria parasite participated in this study. There were more male than female children with malaria illness (M: F = 1.4: 1); ( Table 1). The mean ages of children with UM (49.7 ± 26.2 months), CM (50.7 ± 29.3 months) and control group (73.4 ± 25.5 months) were significantly different (p = 0.001); ( Table 1). The mean weight of control (14.6 ± 5.2 kg), UM (14.6 ± 4.1 kg) and CM (15.1 ± 5.4 kg) were not different (p = 0.056). Also, there were no significant differences in the mean heights of the three groups (Table 1). Only three had history of passing dark urine during the malarial illness and they all had positive results for blood using urine dipstick test. Other features of severe malaria documented among the CM group included: altered level of consciousness (n = 27), severe anaemia (n = 42), multiple convulsions (n = 7) and respiratory distress plus acidosis (n = 9).
None of the study participants had signs of shock. Five children in the CM group also presented with oliguria (urine output < 1.0 ml/m2/hour), which resolved within the first 24-36 hours of admission following administration of intravenous fluid in addition to other treatments. All 42 children who had severe anaemia (haemoglobin < 5 g/dl) were transfused with packed red blood cells at 12 to 15 ml per kg body weight. However, no study participant had clinical signs of dehydration at the time of ultrasonographic scanning. All four deaths recorded in this study occurred in children with complicated (cerebral) malaria, giving a case fatality rate of 4.7% (4/85).
Laboratory data
Mean values of malaria parasite counts for UM and CM are shown in Table 1, with the CM group having significantly higher counts than UM. Children in the control group had no malaria parasitaemia. The mean serum urea of children with CM (12.1 ± 9.0 mmol/L) was significantly higher than that of the healthy control (6.5 ± 2.2 mmol/L) and UM (6.8 ± 2.8 mmol/L) groups (p < 0.001). Mean values of serum creatinine of the control (44.2 ± 8.8 μmol/L), UM (53.0 ± 17.7 μmol/L) and CM (61.9 ± 53.0 μmol/L) groups were significantly different, with that of the CM group being the highest (Table 1). Of all the children who took part in the study, only 5 of those in the CM group had serum creatinine > 88.4 μmol/L. However, the mean renal volume of those 5 children with creatinine > 88.4 μmol/L was significantly higher than that of their counterparts in the same CM group who had normal creatinine (< 88.4 μmol/L), by mean differences of 27.9 cm3 for the right kidney and 18.2 cm3 for the left. All the study participants had haemoglobin AA type.
Renal ultrasound findings
The mean renal length, cortical thickness and estimated volume were compared between the right and left kidneys (Table 2). The age-adjusted mean renal length of the left kidney in the control, UM and CM groups was significantly longer than that of the right kidney in the corresponding groups (Table 2). The age-adjusted mean renal width values in the control, UM and CM groups were slightly larger on the right than the left, but these differences were not statistically significant in any group (Table 2). The adjusted mean A-P diameters were not significantly different between sides in the control and UM groups, but in the CM group the right kidney had a significantly higher A-P diameter (4.05 cm) than the left (3.8 cm) (p < 0.001). The mean cortical thickness was significantly higher on the left than the right in the UM and CM groups. However, no significant difference was found between the adjusted renal volumes of the right and left kidneys in any group.
The estimated and adjusted mean values of kidney size, cortical thickness and renal volume are shown in Table 3. After adjusting for age, the right kidney length of the CM group was significantly longer than control by 0.41 cm (95% CI = 0.16, 0.65; p < 0.001) and UM by 0.32 cm (95% CI = 0.02, 0.62; p = 0.030), while there were no significant differences in left kidney length of control compared with UM and CM. Similarly, the mean left kidney length of CM was higher than control and UM by 0.34 cm (95% CI = 0.09, 0.60; p = 0.005) and 0.41 cm (95% CI = 0.09, 0.72; p = 0.006), respectively. After adjusting for age, the mean width of the right kidney of control was significantly higher than UM by 0.20 cm (95% CI = 0.03, 0.36; p = 0.015), but not different from that of the CM group. Also, the mean kidney width of CM was significantly greater than that of the UM group in both right and left kidneys, by 0.23 cm (95% CI = 0.03, 0.43; p = 0.018) and 0.22 cm (95% CI = 0.01, 0.43; p = 0.032), respectively. Conversely, comparisons of the mean A-P diameters of the kidneys of control, UM and CM showed no significant differences. While the control group had a significantly higher mean cortical thickness than the UM group (mean difference = 0.06 cm for both kidneys), these values were significantly lower than those of CM in both right and left kidneys, by 0.05 cm (95% CI = 0.00, 0.11; p = 0.035) and 0.05 cm (95% CI = 0.002, 0.11; p = 0.037), respectively.
The adjusted mean renal volumes of control and UM were not significantly different. The adjusted mean renal volume of the CM group was significantly higher than control, with mean differences of 7.82 cm3 for the right kidney (p < 0.001) and 5.79 cm3 for the left kidney (p = 0.026). Similarly, the adjusted mean renal volume of the CM group was significantly higher than UM. Increased parenchymal echogenicity of the kidneys was found in 12 children with complicated malaria (14.1% of 85); eight (9.4%) had it in both kidneys, while four (4.7%) had it in the right kidney only. Increased parenchymal echogenicity of the kidneys was found in none of the controls or the children with uncomplicated malaria. There was no significant association between parasite counts and any of the kidney sizes. The mean renal length, cortical thickness and estimated volume of the three children who had dark-coloured urine (haemoglobinuria) were not significantly different from those of the others. Similarly, the mean values of renal sizes among the four deaths were not significantly different from those of the other children.
Discussion
Many studies have reported the effects of malaria on the kidneys [25], with most describing biochemical indices as measures of malaria effects. The present study, it would appear, is the first to utilize ultrasound to assess renal sizes associated with acute falciparum malaria in Nigerian children. In this study, the average renal length of children with acute malaria, whether uncomplicated or complicated, was longer on the left than the right, while the left and right kidneys were similar in width and volume in the same individuals. Also, the renal cortical thickness of the left kidney was slightly greater than that of the right in children with malaria. Previous studies have shown that the left and right kidneys are similar in size among healthy children except in length, with the left being longer than the right in both health and disease [20,30,31]; nevertheless, it was difficult to find appropriate comparisons for renal length in the existing literature.
Moreover, the present study showed that the UM group had increased length, width and cortical thickness in both right and left kidneys, but not increased volume, compared with healthy children. On the other hand, children with complicated malaria had increased renal length, width, and volume when compared with the uncomplicated and control groups. These increases in renal length, width, and volume were greater in the CM group than in the UM group. This finding suggests that the change in renal dimensions may have worsened with increasing severity of malaria illness. However, the relationship between renal length and volume has been reported inconsistently in previous studies. While Thakur et al. [32], in their study of 18 adult patients, concluded that renal length as measured on CT scan is not a good predictor of renal volume in adults, other authors have recently reported good correlation between these two indices [23,33]. Widjaja et al. [33] reported that including parenchymal cortical thickness improved the prediction of renal volume. The increased cortical thickness in the UM and CM groups may have contributed to the overall renal volumes in the children who participated in the present study. Though the observed increase in renal sizes in complicated malaria compared with mild malaria and healthy children was not surprising, the exact mechanism is not clear from our study. It is plausible that this observed difference in renal size is due to the direct and indirect effects of severe manifestations of P. falciparum infection in the patients. Previous studies have shown that widespread sequestration of parasitized red blood cells, excessive release of inflammatory cytokines and the "sludging" effects of the parasite cause widespread swelling in many end-organs, including the kidneys [25]. In the present study, there was no association between parasite counts and ultrasound-measured renal sizes. This finding is in line with the report by Dondorp and colleagues [34], who showed that peripheral parasitaemia is a poor reflection of whole-body parasite mass.
One issue that restricts the generalization of findings from this study is the fact that cases and controls were not effectively matched in terms of age and height, two factors that are important correlates of renal sizes. However, children who participated in the study had relatively comparable renal function as determined by plasma creatinine and urea levels, although both creatinine and urea are imperfect measures of renal function because they do not pick up subtle changes and there is a significant time lag between the onset of injury and the elevation of plasma creatinine.
Magnetic Resonance Imaging and Computed Tomography have been shown to give better measures of renal length and volume using the disc-summation method [24], but in resource-poor countries this equipment is limited to very few tertiary health institutions, and where available the cost is prohibitive. Ultrasound will therefore continue to play a major role in the assessment of kidney sizes, especially in children. Findings from this study provide a basis for the use of ultrasonographic examinations in the diagnosis and follow-up of children suffering from malaria, especially severe falciparum malaria. It may aid early detection and improve the prognosis of renal complications associated with malarial illness. Considering that ultrasound is cheap, widely available, and devoid of ionizing radiation, its use in the evaluation of children with acute falciparum malaria may be recommended. The use of ultrasound may also provide baseline information for further assessments of the kidneys during follow-up of severe malaria in children in endemic areas.
Stress tolerance of Xerocomus badius and its promotion effect on seed germination and seedling growth of annual ryegrass under salt and drought stresses
Comparative evaluations were conducted to assess the effects of different pH levels, NaCl-induced salt stress, and PEG-induced drought stress on the mycelial growth of Xerocomus badius. The results showed that X. badius mycelium grew well at a wide pH range of 5.00 ~ 9.00. Although the mycelium remained viable, mycelial growth of X. badius was significantly inhibited with increasing salt and drought stresses. Furthermore, a soilless experiment in Petri dishes was performed to investigate the potential of X. badius to induce beneficial effects on seed germination and seedling growth of annual ryegrass (Lolium multiflorum Lam.) under salt and drought stresses. Seed priming with X. badius enhanced the seedling growth of L. multiflorum Lam. under NaCl-induced salt stress and PEG-induced drought stress. However, X. badius did not significantly improve the seed germination under non-stress and mild stress conditions. It suggested that X. badius inoculation with seeds was not essential for seed germination under non-stress and mild stress conditions, but contributed highly to seedling growth under severe stress conditions. Therefore, seed priming with X. badius on ryegrass could be an effective approach to enhance plant tolerance against drought and salt stresses. X. badius could be a good candidate for the inoculation of ectomycorrhizal plants cultivation programs in mild saline and semiarid areas.
Introduction
Abiotic and biotic stresses influence plant growth, survival and productivity. Drought and high salinity are the two most important environmental factors that negatively affect seed germination, seedling growth and development, and ultimately influence crop yield, food quality and global food security. Application of stress tolerant plant growth promoting fungi (PGPF) may enhance crop seed germination, seedling establishment, plant growth, and productivity under adverse environmental conditions (de Zelicourt et al. 2013;Guerrero-Galán et al. 2019;Hossain et al. 2017;Kumar and Verma 2018;Tomer et al. 2016;Vijayabharathi et al. 2016;Vimal et al. 2017;Yan et al. 2019).
Mycorrhizal fungi are one of the commonly occurring microorganisms in soil, and more than 80% of land plants naturally establish mutualistic symbiotic relationships with these fungi (Bonfante and Genre 2010). Mycorrhizal fungi play an increasing vitally important role in host plants growth promotion, in inducing plant stress tolerance and agricultural sustainability under various environmental stress conditions (Behie and Bidochka 2014;Bonfante and Genre 2010;Courty et al. 2010;Garcia et al. 2016;Hossain et al. 2017;Javeria et al. 2017;Shen et al. 2018;Yan et al. 2019).
Ectomycorrhizal (ECM) fungi, about 7000 to 10,000 species in the world, play a vital role in plants nutrient cycle by establishing mutual symbiosis with plants' roots (Becquer et al. 2019; Cairney 2012; Taylor and Alexander 2005). Application of the beneficial mycorrhizal fungi in agricultural practices promises to be a fundamental tool for sustainability of crop production (Owen et al. 2015; Prasad et al. 2016; Tomer et al. 2016). In order to develop controlled ectomycorrhization practices that are suitable for the inoculation of field plants and are efficient in promoting host plants' growth under specific environmental conditions, it is necessary to isolate potential ECM fungi and evaluate their biological, physiological and symbiotic characteristics, as well as the specificity that they have with certain hosts, under controlled laboratory conditions.
Here, we investigated the effects of different pH levels, salt stress and drought stress on the mycelial growth of the ECM fungus Xerocomus badius (synonyms: Boletus badius and Imleria badia) (Species Fungorum 2019) in a tolerance test. Based on the findings from the tolerance test with X. badius and the verified mutualistic symbiosis between Lolium multiflorum Lam. and X. badius driven by seed inoculation (Liu et al. 2019), we hypothesized that X. badius would enhance the stress tolerance of L. multiflorum Lam. under drought and salt stresses. Therefore, symbiotic tests were carried out to investigate the effect of seed priming with spore suspensions of X. badius on seed germination and seedling growth of L. multiflorum Lam. under different NaCl-induced salt stress and PEG-induced drought stress conditions. The general objectives of this study were (1) to evaluate the stress tolerance of X. badius under different pH values, salt concentrations and drought, which could be helpful in determining optimized protocols for vegetative propagation under laboratory conditions, and (2) to verify the improvement effect of seed priming with the fungal suspension on seed germination and seedling growth of L. multiflorum Lam. under drought- and salt-stressed conditions, which could have important implications for the use of these fungi as inoculants on agricultural crops.
Plant material, fungus strain and inoculum preparation
Seeds of L. multiflorum Lam. and the ECM fungus X. badius (Preservation No. cfcc5946) were obtained from the Xinrui Seed Industry Limited Company and the China Forestry Culture Collection Center, respectively. Fungus maintenance, incubation, inoculation, and seed pretreatment followed the methods of Liu et al. (2019).
Effect of pH, salt, and drought stress on mycelial growth of X. badius
Three single-factor (pH, salt, or drought) experiments were performed. Five pH values, namely, 5.00, 6.00, 7.00, 8.00, and 9.00, were implemented to study the effect of pH on the mycelial growth of X. badius. Prior to sterilization, the pH level of the potato dextrose agar (PDA) medium was adjusted with an electronic pH meter (PHS-3C, INESA Ltd, Shanghai, China) by adding HCl (1.00 mol L − 1 ) or KOH (1.00 mol L − 1 ). Salt stress was imposed by adding 0.20% (w/v), 0.40% (w/v), 0.60% (w/v), and 0.80% (w/v) NaCl (corresponding to 34.22, 68.45, 102.67 and 136.89 mmol L − 1 ) to the PDA medium (pH = 6.50) before sterilization. X. badius growing at the absence of NaCl was used as the control. Drought stress was induced using 0.00% (w/v), 5.00% (w/v), 10.00% (w/v), 15.00% (w/v), and 20.00% (w/v) polyethylene glycol with a molecular weight of 6000 (PEG-6000) to adjust the water potential of the PDA medium (pH = 6.50) to approximately − 0.16, − 0.27, − 0.45, − 0.72, and − 1.07 MPa, respectively. As PEG reduces agar solidification, fungal isolates were grown in liquid medium (potato dextrose medium). To avoid submersion, a sterilized grit support was placed in the Petri dish with the liquid medium just covering the grit, and a fiber filter was placed on the grit with an inoculation on the filter.
All colonies were cultured in Petri dishes (diameter: 9.00 cm) filled with 10.00 mL of the modified culture medium as described above. Mycelial plugs with diameter of 5.00 mm were taken from the 7-day-old colony edge by using a sterilized mechanical puncher and transferred to the different tested media. At least six replicates were performed for each treatment. The inoculated Petri dishes were sealed with a strip of parafilm and maintained in the dark at 25.00 ± 1.00 °C and 60.00% relative humidity for 10 days in an incubator with constant humidity.
Effect of X. badius inoculation on seed germination and seedling growth of L. multiflorum Lam. under salt and drought conditions
For each treatment, 30 X. badius-inoculated or non-inoculated seeds of L. multiflorum Lam. were sown in each Petri dish (diameter: 9.00 cm) with two layers of humid filter paper covered at the bottom. Two days after sowing, salt and drought were applied to the X. badius-inoculated and non-inoculated seeds. Salt stress was applied by adding 0.00% (w/v), 0.40% (w/v), and 0.80% (w/v) NaCl (according to the preliminary experiment) in the sterilized deionized water. Drought was imposed by adding 0.00% (w/v), 10.00% (w/v), and 20.00% (w/v) PEG-6000 in the sterilized deionized water. All Petri dishes were placed in a random position on a shelf in the laboratory. The experiment lasted for 2 weeks, during which all seedlings were watered every other day with NaCl, PEG-6000 solution, or sterilized water (control) and supplied twice a week with sterilized half-strength Hoagland's solution (pH = 6.50) (Hoagland and Arnon 1950). In the meantime, the residual solution was poured out, and the filter papers were changed to avoid the effects of ion accumulation. To avoid edge effects, all Petri dishes were rotated weekly.
Measurements of colony diameter (CD) and colony average growth rate (CGR)
After 7 days of incubation, the CD in different media was measured in the perpendicular direction using a beveled straightedge. The average of two diameter measurements along the perpendicular axes was used to estimate the colony size during the incubation period. The CGR was determined as the average increase in diameter divided by the total number of incubation days.
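As defined above, the colony diameter is the mean of two perpendicular measurements and the CGR is the average diameter increase divided by the number of incubation days. A minimal sketch of that calculation follows; whether the 5 mm inoculation plug is subtracted from the final diameter is an assumption here, and the example numbers are illustrative.

def colony_diameter_cm(d1_cm, d2_cm):
    """Mean of two perpendicular colony-diameter measurements (cm)."""
    return (d1_cm + d2_cm) / 2.0

def colony_growth_rate(final_diameter_cm, incubation_days, initial_diameter_cm=0.5):
    """Average diameter increase per day (cm/day); plug subtraction is an assumption."""
    return (final_diameter_cm - initial_diameter_cm) / incubation_days

# Illustrative: perpendicular readings of 7.2 and 7.0 cm after 7 days of incubation.
cd = colony_diameter_cm(7.2, 7.0)
print(round(colony_growth_rate(cd, 7), 2))  # ~0.94 cm/day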
Measurements of seed germination rate (GR), shoot height (SH), and seedling total fresh weight (FW)
One week after sowing, the cumulative number of germinated seeds in the different treatments was recorded, and the GR, which was defined as one hundred times the number of germinated seeds divided by the total number of seeds, was calculated. At the end of the experiment, the seedlings in the different treatments were harvested separately, washed in running tap water to remove the chemical substances, and divided into shoot and root portions. The SH and FW were measured.
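The germination rate defined above is one hundred times the number of germinated seeds divided by the total number of seeds sown. A one-line sketch of that calculation (the counts are illustrative, based on the 30 seeds sown per Petri dish):

def germination_rate(germinated, total_sown=30):
    """Germination rate (%) = 100 * germinated / total sown."""
    return 100.0 * germinated / total_sown

print(germination_rate(24))  # 80.0 % for 24 of 30 seeds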
Statistical analyses
The experiments were performed using a completely randomized design. All the measurements were conducted in sextuplicate at least. Data were presented as mean ± standard deviation. Statistical analysis was carried out using the SPSS-13.0 for Windows (Standard released version 13.0 for Windows; SPSS Inc., IL, USA). One-way analysis of variance (ANOVA) was used to evaluate the effects of different pH values, salt concentrations and drought on mycelial growth of X. badius. Two-way ANOVA was used to evaluate the effects of X. badius inoculation and salt or drought stress on seed germination and seedling growth of L. multiflorum Lam.. Tukey's honestly significant difference (HSD) post hoc test (P ≤ 0.05) was performed to test the existence of statistical differences for the same parameter among different treatments.
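The analyses described above (one- and two-way ANOVA followed by Tukey's HSD at P ≤ 0.05) were run in SPSS; an equivalent sketch in Python with statsmodels is shown below, assuming a long-format table with hypothetical column names 'inoculation', 'salt' and 'FW' (the file name is likewise hypothetical).

import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# One row per measured seedling; column names are assumptions for illustration.
df = pd.read_csv("ryegrass_measurements.csv")  # columns: inoculation, salt, FW

# Two-way ANOVA: inoculation, salt stress, and their interaction on fresh weight.
model = ols("FW ~ C(inoculation) * C(salt)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))

# Tukey's HSD across the combined treatment groups at alpha = 0.05.
groups = df["inoculation"].astype(str) + "/" + df["salt"].astype(str)
print(pairwise_tukeyhsd(df["FW"], groups, alpha=0.05))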
Effect of pH on mycelial growth
One-way ANOVA showed that the pH level of the medium had no significant influence on the mycelial growth of X. badius (P > 0.05, Table 1). The X. badius mycelium was able to grow well over a wide pH range of 5.00 ~ 9.00. After 7 days of incubation, X. badius cultured in the medium with pH 8.00 showed the largest CD (7.14 cm) and the highest CGR (1.43 cm day−1), and the smallest CD (6.83 cm) and lowest CGR (1.37 cm day−1) were observed in the medium with pH 5.00. However, statistical analysis showed no significant difference (P > 0.05) in the CD and CGR among the media with different pH levels.
Effect of salt stress on mycelial growth
The NaCl concentration of the culture medium had a significant negative effect on the mycelial growth of X. badius (P < 0.001, Table 2). Significant differences in the CD and CGR of X. badius were observed among the media with different NaCl concentrations (P ≤ 0.05, Table 2). X. badius in the control medium (without NaCl) grew best, as manifested by the largest CD (7.56 cm) and highest CGR (1.51 cm day−1). By contrast, the mycelial growth of X. badius in the presence of NaCl was significantly inhibited and decreased with increasing NaCl concentration. X. badius in the 0.80% NaCl medium showed the smallest CD (5.83 cm) and lowest CGR (1.17 cm day−1).
Effect of X. badius inoculation on seed germination of L. multiflorum Lam. under salt and drought conditions
X. badius inoculation (P ≤ 0.001), salinity (P ≤ 0.001) and their interaction (P ≤ 0.05) had significant effects on the GR (Table 4). In comparison with the non-saline treatment, the GRs of both non-inoculated and X. badius-inoculated L. multiflorum Lam. seeds were decreased by the NaCl-induced salt stress, and the non-inoculated seeds showed larger decrease in GR than the X. badius-inoculated ones. Compared with the nonsaline condition, 0.40% and 0.80% NaCl induced 17.99% and 43.47% decrease in the GR of the non-inoculated seeds, respectively. The GRs of the X. badius-inoculated seeds decreased by 5.49 and 28.84% under 0.40% and 0.80% NaCl condition, respectively. Under non-saline condition, X. badius had no significant influence on the GR of L. multiflorum Lam., but the GR was enhanced by X. badius under 0.40% and 0.80% NaCl-induced saline conditions. Compared with the non-inoculated seeds, the GRs of the X. badius-inoculated seeds increased by 12.00% and 22.34% under 0.40% and 0.80% NaCl condition, respectively. Compared with the non-drought condition, the PEG-induced drought decreased the GRs of both noninoculated and X. badius-inoculated seeds, and the noninoculated seeds showed larger decrease in GR than the X. badius-inoculated ones (Table 5). Compared with the non-drought condition, 10.00% PEG-induced drought stress led to 24.31% and 9.23% decrease in the GRs of the non-inoculated and X. badius-inoculated seeds, respectively. Meanwhile, 20.00% PEG-induced drought stress led to 49.32% and 37.20% decrease in the GRs of the non-inoculated and X. badius-inoculated seeds, respectively. Under non-drought condition, X. badius had no significant influence on the GR of L. multiflorum Lam., but X. badius enhanced GR of L. multiflorum Lam. under 10.00% and 20.00% PEG-induced drought conditions. Compared with the non-inoculated seeds, the GRs of the X. badius-inoculated seeds increased by 19.51% and 23.48% under 10.00% and 20.00% PEG condition, respectively.
Effect of X. badius inoculation on seedling growth of L. multiflorum Lam. under salt and drought conditions
X. badius inoculation, salinity, and their interaction had significant effects on the SH and FW (Table 4). Compared with those under the non-saline condition, salt stress inhibited the growth and biomass accumulation of non-inoculated and X. badius-inoculated L. multiflorum Lam. seedlings, and the non-inoculated seedlings showed a larger decrease than the X. badius-inoculated ones (Fig. 1). Compared with those under the non-saline condition, under 0.40% NaCl condition, the SHs of the non-inoculated and X. badius-inoculated seedlings decreased.

Table 5: Effect of X. badius inoculation on seed germination and seedling growth of L. multiflorum Lam. under different PEG-6000-induced drought conditions. Data are presented as mean of six replicates ± standard deviation. Small letters in the same column show statistically significant differences among different drought stress treatments for the same parameter at P ≤ 0.05 based on Tukey's HSD post hoc test. *, ** and *** significant at P ≤ 0.05, 0.01, and 0.001, respectively.

Compared with those under the non-drought condition (Table 5), drought stress inhibited the growth and biomass accumulation of non-inoculated and X. badius-inoculated L. multiflorum Lam. seedlings, and the non-inoculated seedlings showed a larger decrease than the X. badius-inoculated ones (Fig. 2). Compared with those under the non-drought condition, the SHs of the non-inoculated and X. badius-inoculated seedlings decreased by 29.16% and 22.95%, respectively, under 10.00% PEG condition and by 40.31% and 37.36%, respectively, under 20.00% PEG condition. The FWs of the non-inoculated and X. badius-inoculated seedlings decreased by 29.75% and 25.45%, respectively, under 10.00% PEG condition and by 50.90% and 46.04%, respectively, under 20.00% PEG condition. X. badius inoculation improved the SHs and FWs of the L. multiflorum Lam. seedlings under both non-drought and drought stress conditions. Compared with those of the non-inoculated seedlings, the SHs of the X. badius-inoculated seedlings increased by 19.55%, 30.03%, and 29.15% under 0.00%, 10.00%, and 20.00% PEG-induced drought conditions, respectively, and the FWs increased by 28.86%, 36.73%, and 41.61% under 0.00%, 10.00%, and 20.00% PEG-induced drought conditions, respectively.
Effect of pH on mycelial growth
The pH level is one of the crucial factors affecting mycorrhizal fungus growth and development, mainly by influencing the nutrient availability of the culture medium (Daza et al. 2006; Lazarević et al. 2016; Xu et al. 2008). ECM fungi can grow under conditions from acidic to slightly alkaline (Siri-in et al. 2014), but each fungal species has its optimum pH level for mycelial growth (Lazarević et al. 2016). For example, the mycelium of Scleroderma sinnamariense can grow at a pH range of 2.00 ~ 9.00, with an optimal pH of 5.00 (Siri-in et al. 2014). Boletus edulis and Hebeloma sp. showed the largest CD at pH 5.00, and Laccaria bicolor and Laccaria deliciosus grew best at pH 6.00. The optimum pH levels of the aforementioned fungi were lower than 6.00, suggesting a good adaptation to acid conditions. However, fungal species such as Amanita caesarea (Daza et al. 2006), Laccaria insulsus, and some pleosporalean fungi from saline areas (Qin et al. 2017) grow best under neutral or slightly alkaline conditions. X. badius was isolated from soils with a pH range of 6.50 ~ 7.50, and the colony may grow well in a culture medium with a pH level similar to that of its natural soil environment. Therefore, the pH conditions of the soil from which the fungi are isolated should be considered to optimize the culture and propagation of the fungi in the laboratory and to improve the production of mycorrhizal plants in the nursery. The results indicated that the mycelium of X. badius could grow well over the wide pH range of 5.00 ~ 9.00 (Table 1). After 7 days of incubation, X. badius grown at pH 8.00 showed the largest CD and the highest CGR, and the smallest CD and lowest CGR were observed at pH 5.00. However, no significant differences (P > 0.05) were found in the CDs and CGRs among the media with different pH values (Table 1). X. badius might present high resistance under alkaline conditions, and this characteristic is typical of alkalophilic fungal species (Kulkarni et al. 2019).

Fig. 2: Typical phenotype of L. multiflorum Lam. seedlings 2 weeks after inoculation or non-inoculation with X. badius under different PEG-6000-induced drought conditions.
Effect of salt stress on mycelial growth
Salt stress is one of the most important limiting factors in agriculture worldwide. The practical use of beneficial mycorrhizal fungi with high salt tolerance has been proved to be one of the most effective strategies to alleviate the adverse effects on crops in saline areas (Kumar and Verma 2018). Salt-tolerance evaluation of mycorrhizal fungi in the laboratory could provide a useful theoretical reference for the selection of the proper fungal strain. In this study, X. badius was very sensitive to salt stress, although the mycelium still grew, which is consistent with observations on other fungi (Qin et al. 2017; Tang et al. 2009). Mycelial growth, as reflected by CD and CGR, was significantly inhibited with increasing NaCl concentration (P < 0.001, Table 2). X. badius in the non-saline medium grew best, as manifested by the highest CD and CGR, whereas X. badius in the 0.80% NaCl medium showed the lowest CD and CGR, indicating the worst growth performance (Table 2). Probably, X. badius has a poor ability to absorb Na+ and Cl−, and the accumulation of these redundant ions in the medium resulted in low water potential and reduced availability of nutrients and water for the fungus (Kumar and Verma 2018), thereby restricting mycelial growth. Despite its salt sensitivity, X. badius could still grow and survive in the 0.80% NaCl medium, suggesting that this species is halotolerant rather than halophilic.
However, in nature, soil salinity is caused not only by NaCl but also by magnesium, calcium, potassium, and other salts (Chen et al. 2019). Future research focusing on the effect of natural soil salinity on the growth of the mycelium and the host plant should therefore be carried out, as this would have more practical significance for the utilization of saline soils.
Effect of drought stress on mycelial growth
Research on the effect of PEG-induced drought stress on mycelial growth has been carried out with many ECM fungal strains (Navarro-Ródenas et al. 2011; Zhang et al. 2011; Zhu et al. 2008). In this study, the growth response of X. badius to drought stress induced by PEG-6000 was assessed. The results showed that 5.00% PEG-induced drought stress had no significant negative influence on the CD and CGR of X. badius (P > 0.05, Table 3). However, the mycelial growth of X. badius was significantly inhibited under 10.00% ~ 20.00% PEG-induced drought conditions, as manifested by the significant decrease in CD and CGR (P ≤ 0.05, Table 3). Mycelial growth under water-controlled conditions can reflect the adaptability of a fungus to dry soil and its ability to enhance the drought resistance of its host plants (Duñabeitia et al. 2004). Also, host plants may influence the morphology and physiology of the fungus after mycorrhization (Zhang et al. 2011). Therefore, it is necessary to establish the fungus-mycorrhiza-host plant symbiosis and study the associated drought resistance prior to practical application.
Effect of seed priming with X. badius suspensions on seed germination and seedling growth of L. multiflorum Lam. under salt and drought conditions
Drought and high salinity are the two most important environmental factors that adversely affect the seed germination of crops and the survival, growth, and productivity of plants. In recent years, seed biopriming with PGPF spore suspensions has been extensively proved to be beneficial for the seed germination and seedling growth of crops under non-stress and stress conditions (Bonfante and Genre 2010; de Zelicourt et al. 2013; Guerrero-Galán et al. 2019; Hossain et al. 2017; Javeria et al. 2017; Kumar and Verma 2018; Tomer et al. 2016; Vijayabharathi et al. 2016; Vimal et al. 2017; Yan et al. 2019). Based on the findings from the tolerance test of X. badius and the verified mutualistic symbiosis between L. multiflorum Lam. and X. badius driven by seed inoculation (Liu et al. 2019), the effects of seed priming with spore suspensions of X. badius on seed germination and seedling growth of L. multiflorum Lam. were investigated under different salt and drought conditions. The results indicated that seed priming with X. badius had no significant effect on the GR under non-stress conditions (P > 0.05, Tables 4 and 5), which is consistent with our previous study (Liu et al. 2019) and with studies on bromeliad (Leroy et al. 2019) and on barley and oat (Murphy et al. 2017) inoculated with other PGPF species. However, the GR was significantly enhanced by seed priming with X. badius under drought and salt stress conditions. X. badius inoculation greatly improved the SH and FW of L. multiflorum Lam. seedlings under non-stress and drought/salt stress conditions (Figs. 1 and 2), and the improvement under stress conditions was markedly higher than that under non-stress conditions (P ≤ 0.05, Tables 4 and 5). Similar improvements in seed germination and seedling growth induced by mycorrhizal fungal inoculation of seeds have also been reported for Dendrobium officinale (Tan et al. 2014) and other epiphytic orchid species (Alghamdi 2019). The results also showed that X. badius inoculation led to earlier seed germination and greater survival of seedlings compared with the non-inoculated seeds under both non-stress and stress conditions. Thus, fungal inoculation of seeds was not essential for seed germination under non-stress and mild stress conditions but contributed greatly to the survival and growth of the seedlings, especially under severe stress conditions. The symbiotically associated fungi could promote the degradation of the cuticle and cellulose of the seed coat, alleviating the restriction imposed by the seed coat and allowing earlier germination. In addition, they can produce many plant growth-promoting compounds, such as phytohormones (gibberellins and indole acetic acid) and secondary metabolites, and enhance water and nutrient availability, which are conducive to seed germination and subsequent seedling growth (Behie and Bidochka 2014; Cairney 2012; Garcia et al. 2016; Hossain et al. 2017; Javeria et al. 2017; Owen et al. 2015; Shen et al. 2018).
In comparison with the non-stress condition, NaCl-induced salt stress and PEG-induced drought stress decreased the GR, SH, and FW of the non-inoculated and X. badius-inoculated seeds/seedlings, and the non-inoculated seeds/seedlings showed a larger decrease in these three parameters than the X. badius-inoculated ones (Figs. 1 and 2; Tables 4 and 5). The GRs, SHs, and FWs of both non-inoculated and X. badius-inoculated L. multiflorum Lam. seeds/seedlings decreased rapidly with increasing NaCl and PEG concentrations, and PEG showed a more negative effect than NaCl (Tables 4 and 5), which is in agreement with the results of previous studies (Murillo-Amador et al. 2002; Petrović et al. 2016). The inhibition of seed germination by salt and drought stress was mainly due to limited water uptake by the seed, which caused the subsequent inhibition of seedling growth. Probably, the accumulation of Na+ and Cl− in the substrate could also exert a toxic effect on seed germination and seedling growth by creating an external osmotic potential (Zhang et al. 2010). Compared with the PEG solution, the osmotic potential difference caused by ion accumulation in the NaCl solution can also induce more rapid water uptake by the seed and thereby provide enough water content for earlier seed germination.
In conclusion, experimental evidence of the ability of X. badius to adapt to a series of environmental stresses, including pH, salt stress, and drought stress, is presented. The results indicated that X. badius has a wide pH tolerance, especially a high alkali tolerance, and might be well adapted to alkaline environments. Furthermore, seed priming with spore suspensions of X. badius was not essential for the seed germination of L. multiflorum Lam. under non-stress and mild stress conditions, but had a beneficial effect on subsequent seedling growth under severe salt and drought stress conditions. Hence, the successful establishment of X. badius on L. multiflorum Lam. seedlings under stressful conditions can be an effective approach to increase plant tolerance to environmental stresses.
Identification and characterisation of phenolic compounds extracted from Moroccan olive mill wastewater
Received 12 Dec. 2013; accepted 21 Apr. 2014.
Inass LEOUIFOUDI1,2*, Abdelmajid ZYAD2, Ali AMECHROUQ3, Moulay Ali OUKERROU2, Hassan Ait MOUSE2, Mohamed MBARKI1
1 Transdisciplinary Team of Analytical Science for Sustainable Development, Faculty of Science and Technologies, Sultan Moulay Slimane University, Beni Mellal, Morocco
2 Laboratory of Biological Engineering, Faculty of Science and Technologies, Sultan Moulay Slimane University, Beni Mellal, Morocco; e-mail: inass.leouifoudi@gmail.com
3 Laboratory of Molecular Chemical and Natural Substances, Faculty of Science, Meknes, Morocco
*Corresponding author
Introduction
The olive oil industry in the Mediterranean region rejects annually up to 8.4 million m³ of OMWW, of which 250 000 m³ are produced in Morocco (Ben Sassi et al., 2006). This has become a major environmental problem in the countries of this region. Tadla-Azilal is an important olive oil production region of Morocco, and the resulting OMWW was directly discharged into soils without any treatment, thus causing a negative environmental impact (Ruiz-Rodriguez et al., 2010). OMWW is a mildly acidic, red-to-black coloured liquid of high conductivity. It is particularly rich in organic matter and toxic fatty acids (Mekki et al., 2006; Sayadi et al., 2000). The high-molecular-weight polyphenols, similar in structure to lignin, give OMWW its characteristic brownish-black colour (Assas et al., 2002; D'Annibale et al., 2004). Moreover, the high pollution potential of this effluent is commonly attributed to its high content of monomeric phenols, which are toxic to plants, water and some microorganisms (Capasso et al., 1992). OMWW may contain up to 10 g of phenols per litre (D'Annibale et al., 1998), while in the European Union the accepted maximum phenol concentration in wastewaters is 1 mg/L (Urban Waste Water Treatment Directive 91/271/EEC) (European Commission, 1991). Therefore, environmental survey authorities encourage producers to change their production systems. However, OMWW can also be considered a rich source of natural antioxidant phenolic compounds, about 100-fold more concentrated than in olive oil (Lesage-Meessen et al., 2001). Its composition varies both qualitatively and quantitatively according to the olive variety, climate conditions, cultivation practices, olive storage time, and the olive oil extraction process (Borja et al., 1997; Ergun Ergül et al., 2009; Fiorentino et al., 2003; Davies et al., 2004). Apart from water (83-92%), the main components of OMWW are phenolic compounds, sugars and organic acids. OMWW also contains valuable resources such as mineral nutrients, especially potassium, which could potentially be reused as a fertilizer (Aranda et al., 2007; Dermeche et al., 2013). Thus, the scientific community is still in search of effective processes for reducing these contaminants (Mantzavinos & Kalogerakis, 2005). In this context, several techniques have been used to recover phenolic compounds from olive by-products, including enzymatic preparation, solvent extraction, membrane separation, centrifugation, and chromatographic procedures (Dermeche et al., 2013). Solvent extraction is the most commonly employed technique to extract phenolic compounds, and ethyl acetate is the most effective solvent for the treatment of OMWW under acidic conditions (Allouche et al., 2004). Different biological, chemical and physical processing practices have been used for the valorisation of olive residues, but the development of these processes is deterred by their high costs. Furthermore, the renewed interest in such natural products has been supported by advances in chromatographic and spectroscopic techniques that have greatly facilitated drug discovery from plants (Obied et al., 2005).
The aim of the present work was to identify and quantify the phenolic compounds occurring in Moroccan OMWW originating from different areas, with a view to understanding their molecular bioactivities for further applications, such as their antioxidant potential.
Middle Infrared spectroscopy analysis
FT-MIR spectra of duplicate samples were obtained using a Bruker Vector 22 spectrometer with an integrated Michelson interferometer and Opus 5.5 software. The measurements were obtained with crude samples deposited on an attenuated reflection cell equipped with a diamond crystal. The generated spectra covered wavenumbers ranging between 400 and 4000 cm−1.
HPLC/ESI-MS analyses (High performance liquid chromatography/Electro-Spray Ionization-Mass Spectrometry)
HPLC-MS analyses were performed at 279 nm and 30 °C using an RP C18 column (150 × 4.6 mm, 5 µm) on a Thermo Fisher apparatus equipped with a Surveyor quaternary pump coupled to a PDA detector (diode array detector: 200-600 nm) and an LCQ Advantage (ESI) ion trap mass spectrometer (Thermo Finnigan, San Jose, CA). The injected volume was 20 µL. The mobile phase (0.5 mL/min) consisted of solvent A (water + 0.05% trifluoroacetic acid) and solvent B (acetonitrile + 0.05% methanol). A six-step gradient was applied for a total run time of 76 min, as follows: starting from 80% solvent A and 20% solvent B, increasing to 30% solvent B over 30 min, then isocratic for 10 min, increasing to 30% solvent B over 10 min, to 40% over 30 min and to 20% solvent B over 2 min, and finally isocratic for 4 min. The ESI ionization conditions were: spray voltage 4 kV, capillary 350 °C, 14 V. Pure nitrogen was the sheath gas and pure helium was the collision gas. Full-scan mass data (m/z) were obtained in negative mode over the range 100 to 2000 Da.
Antioxidant activity
The antioxidant activity was assessed using the DPPH (2,2-diphenyl-1-picrylhydrazyl) radical scavenging assay. The test was carried out in a 96-well microtiter plate. The samples and the positive control, vitamin C, were diluted with methanol to prepare extract solutions at 200, 100, 50, 25, 12.5, 6.25 and 3.125 µg of sample/ml. 150 µl of 0.004% DPPH solution was pipetted into each well of the 96-well plate, followed by 8 µl of the sample solutions. The plates were incubated at 37 °C for 30 min and the absorbance was measured at 540 nm using an ELISA microtiter plate reader. The experiment was performed in triplicate and the percentage of scavenging activity was calculated using the formula given below. IC50 (inhibitory concentration) is the concentration of the sample required to scavenge 50% of the DPPH free radicals (Equation 1).
Scavenging activity (%) = [(Ao − As) / Ao] × 100 (Equation 1), where Ao is the absorbance of the control and As is the absorbance of the sample at 540 nm.
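To make the calculation concrete, the following Python sketch applies Equation 1 to a dilution series and estimates IC50 by linear interpolation between the concentrations bracketing 50% scavenging. The absorbance readings are hypothetical placeholders, not values measured in this study.

```python
# DPPH scavenging and IC50 estimation -- illustrative sketch only.
# Assumption: the absorbance values below are hypothetical placeholders.
import numpy as np

def scavenging_percent(a_control, a_sample):
    """Equation 1: scavenging (%) = (Ao - As) / Ao * 100."""
    return (a_control - np.asarray(a_sample)) / a_control * 100.0

def estimate_ic50(concentrations, scavenging):
    """Concentration giving 50% scavenging, by linear interpolation.
    Assumes scavenging rises monotonically with concentration."""
    order = np.argsort(scavenging)
    return float(np.interp(50.0, np.asarray(scavenging)[order],
                           np.asarray(concentrations)[order]))

conc = [3.125, 6.25, 12.5, 25.0, 50.0, 100.0, 200.0]    # ug extract / ml
a_control = 0.80                                        # DPPH blank at 540 nm
a_sample = [0.74, 0.69, 0.61, 0.50, 0.36, 0.22, 0.12]   # sample wells at 540 nm

sc = scavenging_percent(a_control, a_sample)
print("scavenging %:", np.round(sc, 1))
print("estimated IC50 ~ %.1f ug/ml" % estimate_ic50(conc, sc))
```

Linear interpolation is only one simple way to read off IC50; a dose-response curve fit would give a smoother estimate when more replicates are available.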
Statistical analysis
The experiments were performed in triplicate and the data are expressed as means ± standard deviation. The comparison of the means was made by Student's t-test (STATISTICA software). Differences were considered significant at p < 0.05.
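Purely as an illustration of the comparison of means described above, the sketch below runs a two-sample Student's t-test at the 5% level with scipy; the triplicate values are hypothetical placeholders and scipy stands in for the STATISTICA software actually used.

```python
# Comparison of means -- illustrative sketch only (scipy in place of STATISTICA).
# Assumption: the triplicate values below are hypothetical placeholders.
from scipy import stats

mountainous = [9.8, 10.4, 10.1]   # e.g. total phenols, g GAE/L
plain = [6.9, 7.3, 7.1]

t_stat, p_value = stats.ttest_ind(mountainous, plain)
print("t = %.2f, p = %.4f" % (t_stat, p_value))
print("significant at p < 0.05" if p_value < 0.05 else "not significant at p < 0.05")
```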
Chemicals
All solvents and chemicals were of HPLC grade and were obtained from Sigma Chemical Co., Saint-Quentin (France).
Samples
Four OMWW samples were obtained as a liquid by-product (vegetation waters) from a discontinuous three-phase olive processing mill and stored at 4 °C. The samples were produced from the Moroccan Picholine olive variety (identified and authenticated by Pr. A. Boulli, Department of Biology, Sultan Moulay Slimane University, and stored as a voucher specimen in the Faculty of Science and Technologies, Beni Mellal, Morocco). The samples were collected at the end of the olive harvest season (January to March 2010), at the maturation stage of red-black olives, from four areas of the Tadla-Azilal region (central Morocco): Beni Mellal and Krazza (plain areas) and Azilal and Afourar (mountainous areas).
Extraction of phenolic compounds
The procedure was carried out using the analytical methodology described by De Marco et al. (2007) with some modifications. The samples were first washed with hexane [1:1 (v/v)] in order to remove the lipid fraction: 45 mL of OMWW were mixed with 45 mL of hexane; the mixture was shaken and then centrifuged for 15 min at 4000 rpm. The phases were separated and the washing was repeated twice. Extraction of the phenolic compounds was then carried out with ethyl acetate: 100 mL of washed OMWW were mixed with 100 mL of ethyl acetate; the mixture was vigorously shaken and centrifuged for 10 min at 4000 rpm. The phases were separated and the extraction was repeated three times. The ethyl acetate was evaporated and the residue was stored at −20 °C for subsequent analyses.
Evaluation of the total phenolic compounds content
The total phenolic compound content of each extract was evaluated by spectrophotometry using the Folin-Ciocalteu method (Singleton & Rossi, 1965; Scalbert et al., 1989) with some modifications. Briefly, a 2.5 mL portion of 0.2 N Folin-Ciocalteu reagent was mixed with 0.5 mL of the sample. The reaction was kept in the dark for 5 min. Then, 2 mL of a sodium carbonate solution (75 g/L) was added to the mixture and the reaction was kept in the dark for 1 h. The absorbance was measured at 765 nm with a Jasco V-630 spectrophotometer. Gallic acid was used as the phenolic standard for the calibration curve (10-90 mg/L; y = 0.0009x − 0.0004, where x and y represent the gallic acid concentration (mg/L) and the absorbance at 765 nm, respectively; r² = 0.9981). The total phenolic compound content of OMWW was expressed as gallic acid equivalents in grams per litre (g GAE/L residue).
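For illustration, the short sketch below inverts the calibration line given above to convert an A765 reading into gallic acid equivalents; the absorbance value and the dilution factor are assumptions introduced only for the example.

```python
# Folin-Ciocalteu quantification -- illustrative sketch only.
# Uses the calibration line reported above: A765 = 0.0009 * [GAE mg/L] - 0.0004.
# Assumption: the absorbance reading and the dilution factor are placeholders.

def gae_mg_per_l(a765, slope=0.0009, intercept=-0.0004, dilution_factor=1.0):
    """Invert the calibration line and correct for sample dilution."""
    return (a765 - intercept) / slope * dilution_factor

a765 = 0.045                      # hypothetical reading for a diluted extract
print("%.1f mg GAE/L in the assayed solution" % gae_mg_per_l(a765))
print("%.2f g GAE/L in the extract (assuming a 100-fold dilution)"
      % (gae_mg_per_l(a765, dilution_factor=100) / 1000.0))
```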
Total phenolic content
As can be seen in Table 1, the total biophenol content varied according to the production area. The OMWW extracts originating from the mountainous areas had the highest phenolic content compared to the extracts from the plain areas.
These results suggest that climate and geographical conditions influence the phenolic composition. Several factors converge to determine the amount of total phenolic compounds in OMWW, including the olive cultivar, soil composition, ripeness of the fruit, climate and agronomic conditions, storage conditions prior to extraction, and the processing techniques. As these factors affect the phenol profile in the olive fruit, they also affect the profile in olive residues (Dermeche et al., 2013; Allouche et al., 2004; Obied et al., 2005).
In this perspective, Boscaiu et al. (2010) found a positive correlation between the degree of environmental stress and the level of phenolic compounds accumulated in plants, suggesting a role of these secondary metabolites in defence mechanisms against stress. The Tadla-Azilal region is characterized by a continental climate with an intensely cold winter and a very hot summer. The temperature varies from 3-4 °C to 48-50 °C, with large variations in rainfall. The mountainous areas are characterized by poor soils with long periods of water shortage compared to the plain areas with irrigated rich soils (Juili et al., 2013), which may suggest an important degree of environmental stress, mainly in the mountainous areas.
Identification of phenolic compounds extracted from OMWW
The identification of biophenols was performed by comparing HPLC retention times with the literature and was confirmed by the relevant molecular mass data from LC-MS (data not shown). HPLC provided the separation of the major biophenols in the OMWW extracts, as illustrated in Figure 2 for detection at 279 nm, where the differences between mountainous and plain area OMWW extracts can be observed. Mountainous area extracts had higher levels of the biophenol classes (68.01%; 68.81%) than plain area extracts (52.59%; 58.98%). The HPLC-MS analyses revealed the presence of high amounts of hydroxytyrosol, flavonoids and secoiridoid derivatives. In addition, verbascoside, nüzhenide and, mainly, a higher amount of polymeric substances (P) were also detected. The phenolic composition of the OMWW extracts is summarized in Table 2.
The HPLC-MS analyses showed the presence of the following classes. Phenolic alcohols: the ion spectra of tyrosol (m/z 137) and hydroxytyrosol (m/z 153), identified as the main phenolic compounds detected in OMWW, were found in all the investigated samples. Their content ranged from 13 to 22% of total phenols, depending on the geographical area. They are characterized by their high antioxidant activity, especially hydroxytyrosol, and by other biological effects such as antimicrobial and anti-inflammatory activities (Allouche et al., 2004; De Marco et al., 2007; Capasso et al., 2002; Suarez et al., 2009; Visioli et al., 2002).
FT-MIR spectra analysis
The FT-MIR spectral analyses of all OMWW extracts showed similar spectroscopic profiles although they originate from different geographical locations. The FT-MIR spectra of two OMWW extracts show similar characteristic features with some differences in band intensity (Figure 1). The OH-stretching band at 3430 cm−1, arising from numerous sources (including water), is very similar and shows higher intensity (Ibarra, 1989). The band at 2900 cm−1 from aliphatic C-H stretching (with the band at 1450 cm−1 from single-bond vibrations) shows considerable intensity and points to important aliphatic moieties. The OMWW extract shows a distinct band at 1625 cm−1, which is attributed to C-C bonds conjugated with C-O and COO− groups. In the region 1400-1000 cm−1, the OMWW extracts show a lower intensity of the peak at 1400 cm−1 corresponding to COO− vibrations, but in the entire region 1300-1000 cm−1 they show much stronger intensity, including the peak at 1260 cm−1 corresponding to C-O vibrations (Hoque, 1999). The detected functional groups presumably reflect the presence of the main constituents of olive mill wastewaters: fatty organic acids and phenolic compounds (Dermeche et al., 2013).
Secoiridoids: oleuropein, an ester of elenolic acid and hydroxytyrosol, identified as a major phenolic compound of OMWW (Dermeche et al., 2013; Visioli et al., 2002; Romero et al., 2002), was identified by its molecular ion at m/z 539 and by the presence of the hydroxytyrosol (m/z 153) fragment ion, but only in the extracts derived from the mountainous areas, whereas we failed to identify this compound in the plain area extracts. Oleuropein in these extracts had probably degraded into elenolic acid and hydroxytyrosol through the action of an esterase during the mechanical olive oil extraction process (Visioli et al., 2002). This was probably the case with oleuropein in the studied samples, since abundant amounts of hydroxytyrosol were mainly recovered in the olive oil residues. Another secoiridoid identified at m/z 523 may correspond to ligstroside, produced by the loss of the glucose molecule (m/z 162) of nüzhenide, together with its characteristic fragment ion at m/z 335 (Suarez et al., 2009; De La Torre-Carbot et al., 2005). This compound was characterized by De Marco et al. (2007) as a good antioxidant phenolic compound in Italian olive oil mill wastewater. ESI-MS data of both mountainous and plain OMWW extracts indicated a molecular ion at m/z 623 (peak 13) and its characteristic fragments, which is in accordance with the fragmentation of verbascoside. These results were corroborated by the fragmentation profile of verbascoside described by Ryan et al. (2002). Peak 19 showed a deprotonated molecule at m/z 685 that could be attributed to nüzhenide, previously identified in olive seed (Silva et al., 2006).
Phenolic acids: vanillic acid, sinapic acid, syringic acid, caffeic acid, and p-coumaric acid (peaks 4, 5, 6, 7 and 8, respectively), which have been frequently reported in OMWW, could be found in most of the investigated extracts. Vanillic acid (m/z 167) and caffeic acid (m/z 179) were found at notable levels in the extracts from the mountainous areas (4.25 and 8.52% of total phenols). p-Coumaric acid (m/z 163) was detected only in the extracts from the plain areas, and its amount was significantly higher (9.98% of total phenols). Furthermore, new phenolic acids, namely dihydroxymandelic acid (m/z 183), tetrahydroxymandelic acid (m/z 215), and 3,4,5-trimethoxybenzoic acid, characterized by Aranda et al. (2007) and Capasso et al. (1992), were also identified.
Flavonoids: a fragment ion with m/z 285, diagnostic of luteolin (peak 17), and its derivative luteolin-7-glucoside (peak 16), with m/z 447 (Ryan et al., 2002), were identified in both mountainous and plain area extracts as the major flavones (5.69% and 6.25%, respectively). Based on the molecular ion at m/z 593, luteolin-7-rutinoside could be proposed only for the mountainous area (3.35% of total polyphenols). According to the literature, the flavone luteolin-7-rutinoside was previously detected in olive leaves and olive by-products, and its ESI-MS data were similar to those of peak 15, eluted before luteolin-7-glucoside (Dermeche et al., 2013; Ryan et al., 2002). Comparison of the ESI-MS data with the literature also allowed the identification of an ion at m/z 609, attributed to rutin; this compound has previously been detected in OMWW and olive pulp (Dermeche et al., 2013). Another flavanone, identified at the ion m/z 271 (0.23 and 1.77% of total phenols), could be attributed to naringenin. Nüzhenide, a compound identified by Silva et al. (2006) in the olive seed, was found at the ion m/z 685 in low quantities (below 2% of the total polyphenols). Both compounds exhibited a good antioxidant potential (Silva et al., 2006; McDonald et al., 2001). The MS analyses of the mountainous extracts showed molecular ions at m/z 269, 431 and 577, suggesting the presence, respectively, of apigenin (0.38% of the total phenol content) and two of its derivatives, apigenin-7-glucoside and apigenin-7-rutinoside, detected at variable amounts (Dermeche et al., 2013; Obied et al., 2005; Suarez et al., 2009).
Moreover, the Moroccan OMWW phenolic fraction represents a complex emulsion containing mainly phenolic compounds, as confirmed by Bianco et al. (2003), who identified 20 phenolic compounds in OMWW using HPLC-MS-MS, including the classes of hydrophilic phenols identified in Table 2. As detailed above, its concentration and composition vary from one region to another depending on several parameters. The mountainous areas offered the highest phenolic content compared to the plain areas, suggesting the impact of environmental stress factors and of the variability of the geographical and climatic conditions of the Tadla-Azilal region. The olive variety, its maturity and the methods used to extract and analyse the phenolic compounds could also explain the variability of phenols in OMWW. Accordingly, the phenolic compounds identified in OMWW, as well as their concentrations, are highly variable from one study to another. Several previous investigations have identified other phenolic compounds in OMWW extracts. As an example, Visioli et al. (2002) reported that oleuropein is a major phenolic compound in OMWW, whereas we failed to identify this compound in the plain areas, as Allouche et al. (2004) did in Tunisian OMWW. One explanation could be that the OMWW was sampled late in the olive harvest (mature olives), when the oleuropein and ligstroside had already been degraded into hydroxytyrosol, tyrosol and various derivatives, which explains the high amounts of the latter quantified in the OMWW extracts (De Marco et al., 2007; Capasso et al., 2002; Bianco et al., 2003). Juarez et al. (2008) identified ferulic acid, p-coumaric acid, 3,4,5-trimethoxybenzoic acid, and p-hydroxybenzoic acid as major compounds in Spanish OMWW. Dermeche et al. (2013) detected other compounds in Algerian OMWW: gallic acid, vanillic acid, cinnamic acid, 4-methylcatechol, 4-hydroxybenzoic acid, protocatechuic acid, and 3,4-dihydroxyphenylacetic acid. In other studies, Lesage-Meessen et al. (2001) and Bouzid et al. (2005) identified vanillic acid, ferulic acid and vanillin as major polyphenols in French and Tunisian OMWW extracts. In Italian OMWW, the major polyphenols identified by Capasso et al. (1992) and Casa et al. (2003) were catechol, 4-methylcatechol, 4-hydroxybenzoic acid and trans-cinnamic acid.
Antioxidant activity of the ethyl acetate phenolic extracts
The phenolic extracts were tested for their antioxidant activity (Table 3) using the stable free radical DPPH assay (Kumar et al., 2013). The mountainous extracts exhibited the highest antiradical potential, with a value three-fold greater than that of the plain extracts. This could be attributed to the higher concentrations of antioxidant phenolic compounds, such as hydroxytyrosol, secoiridoid derivatives and flavonoids, which account for large proportions of the total phenol content. Indeed, among the biophenols identified in olive mill waste, hydroxytyrosol has shown high antioxidant properties (Allouche et al., 2004; Obied et al., 2005, 2007, 2008; De Marco et al., 2007; Visioli et al., 2002).
Secoiridoid derivatives:
The spectra generated from the plain OMWW extracts gave a deprotonated molecule at m/z 377 (peak 24) corresponding to the oleuropein aglycon (3,4-DHPEA-EA) (De Marco et al., 2007; Suarez et al., 2009). Another molecule identified as an oleuropein derivative, shown by its deprotonated molecule [M-H]− at m/z 319, may be attributed to the oleuropein aglycone isomer in aldehyde form (3,4-DHPEA-EDA), previously identified by Servili et al. (1999) in vegetation waters. The ion at m/z 241, corresponding to the elenolic acid fragment derived from oleuropein, was identified only in the mountainous areas, at an appreciable content (5.7% of total phenols). Moreover, the spectra of all the extracts showed a product ion at m/z 315 (peak 1) at a higher content, ranging from 4 to 13%, which could be attributed to hydroxytyrosol glucoside produced by the loss of a rhamnose unit from the verbascoside molecule (Ramos et al., 2013; Cardoso et al., 2005). Further derivatives of ligstroside were detected in both mountainous and plain areas. The most abundant was the ligstroside aglycon at m/z 361 (3.19% of total phenols), characterized by De La Torre-Carbot et al. (2005) and Suarez et al. (2009).
Lignans:
The product ion spectra showed ions at m/z 357 and m/z 415, which can be attributed to pinoresinol and 1-acetoxypinoresinol, respectively; these lignans were identified in most of the extracts in smaller quantities (below 2.32% of the total polyphenols).
Furthermore, important levels of molecules with molecular masses above 1000 Da (800-2000 Da), corresponding to the polymeric phenols, were detected as very broad peaks in the ranges between 16 and 22 min, 32 and 38 min, and 54 and 70 min, co-eluting with the previously mentioned secoiridoids and flavonoid glycosides. We hypothesize that these compounds are the supposed polymerin and metal polymeric organic compounds that were previously recovered from olive oil mill wastewaters and proved to be composed of polysaccharides, melanin, proteins and metals (Aranda et al., 2007; Dermeche et al., 2013; Capasso et al., 2002; Hamdi, 1993).
Nonetheless, in full-scan mode, several unidentified compounds with strong m/z signals were observed, whose MS fragmentation spectra indicated various product ions (data not shown).
Conclusion
For the first time, olive mill wastewater was characterized for its phenolic composition according to its geographical area of origin. This work confirms the interest of olive mill wastewaters as a source of natural antioxidant biophenols, especially those derived from the mountainous olive-growing areas, which are characterized by pesticide-free olive production and the highest polyphenol contents. The high antioxidant potential of the OMWW phenolic extracts was related to their high contents of hydroxytyrosol, secoiridoid derivatives, and flavonoids. OMWW is therefore a promising source of antioxidant phenolics with further potential biological applications, for example as food antioxidant agents. Moreover, understanding the molecular activities of these natural compounds can lead to the development of new applications in the pharmaceutical and biomedical domains.
Table 1. Total phenolic content in OMWW extracts from the four geographical areas of the Tadla-Azilal region, expressed as grams of gallic acid equivalents per litre of OMWW extract.
Table 2. Phenolic composition of Moroccan OMWW extracts analysed by HPLC-MS a. a Results are expressed as percentage of total phenols in OMWW extracts. b Mass/charge (m/z) value. nd = not detected.
Table 3. Scavenging effects (IC50, µg/ml) of OMWW extracts on DPPH free radicals. Antiradical activity IC50 (µg/ml) was defined as the concentration of extract necessary to decrease the initial DPPH radical concentration by 50%. Values are expressed as the mean ± standard deviation (S.D.) of triplicate analyses.
a Positive control
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.